Jan 28 15:03:07 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 28 15:03:07 crc restorecon[4709]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:07 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 
15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 15:03:08 crc 
restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 
15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 
15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc 
restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:03:08 crc restorecon[4709]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 28 15:03:08 crc restorecon[4709]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 28 15:03:09 crc kubenswrapper[4981]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 15:03:09 crc kubenswrapper[4981]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 28 15:03:09 crc kubenswrapper[4981]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 15:03:09 crc kubenswrapper[4981]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
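Note on the long restorecon run above: "not reset as customized by admin" means restorecon found a context it would normally correct but left it alone, because the file's type (here container_file_t, plus per-pod MCS categories such as c7,c13) belongs to the policy's customizable types, which restorecon skips unless forced (restorecon -F). Below is a minimal sketch for summarizing that run; it assumes the journal output was saved to a file named journal.log (the filename is an assumption, not part of the log), and it tallies the skipped entries by the context they would have been reset to.

```python
# Minimal sketch, assuming the journal above was saved to "journal.log"
# (hypothetical filename). Tallies restorecon's "not reset as customized
# by admin" entries by the SELinux context they would have been reset to.
import re
from collections import Counter

PATTERN = re.compile(
    r"restorecon\[\d+\]: (?P<path>\S+) not reset as customized by admin "
    r"to (?P<context>\S+)"
)

def summarize(journal_path: str) -> Counter:
    counts: Counter = Counter()
    with open(journal_path, encoding="utf-8") as fh:
        for line in fh:
            # A physical line may hold several journal entries, so scan all.
            for match in PATTERN.finditer(line):
                counts[match.group("context")] += 1
    return counts

if __name__ == "__main__":
    for context, count in summarize("journal.log").most_common():
        print(f"{count:6d}  {context}")
```

On this log the dominant bucket would be system_u:object_r:container_file_t:s0:c7,c13, the catalog-content volumes of the two registry pods.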
Jan 28 15:03:09 crc kubenswrapper[4981]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 28 15:03:09 crc kubenswrapper[4981]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.031384 4981 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038401 4981 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038418 4981 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038423 4981 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038428 4981 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038432 4981 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038437 4981 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038441 4981 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038445 4981 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038449 4981 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038453 4981 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038458 4981 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038463 4981 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038468 4981 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038473 4981 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038479 4981 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
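Note on the "Flag ... has been deprecated" lines above: the kubelet is warning that these command-line flags should instead be set in the KubeletConfiguration file passed via --config (or, for --minimum-container-ttl-duration, replaced by eviction settings). A small sketch for extracting the affected flags from the same assumed journal.log follows; the filename is again an assumption.

```python
# Minimal sketch, assuming the same saved "journal.log" (hypothetical
# filename). Lists the deprecated kubelet flags reported at startup,
# i.e. the settings that should migrate into the --config file.
import re

FLAG_PATTERN = re.compile(r"Flag (--[\w-]+) has been deprecated")

def deprecated_flags(journal_path: str) -> list[str]:
    flags: set[str] = set()
    with open(journal_path, encoding="utf-8") as fh:
        for line in fh:
            flags.update(FLAG_PATTERN.findall(line))
    return sorted(flags)

if __name__ == "__main__":
    for flag in deprecated_flags("journal.log"):
        print(flag)
```

On this boot it would print --container-runtime-endpoint, --minimum-container-ttl-duration, --pod-infra-container-image, --register-with-taints, --system-reserved, and --volume-plugin-dir, all of which appear in the warnings above.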
Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038485 4981 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038490 4981 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038495 4981 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038499 4981 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038511 4981 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038516 4981 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038520 4981 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038524 4981 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038528 4981 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038532 4981 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038536 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038540 4981 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038545 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038550 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038554 4981 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038558 4981 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038562 4981 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038567 4981 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038571 4981 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038575 4981 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038580 4981 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038590 4981 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038594 4981 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038598 4981 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038602 4981 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 15:03:09 crc 
kubenswrapper[4981]: W0128 15:03:09.038606 4981 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038610 4981 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038614 4981 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038618 4981 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038622 4981 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038626 4981 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038630 4981 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038634 4981 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038638 4981 feature_gate.go:330] unrecognized feature gate: Example Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038642 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038646 4981 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038650 4981 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038654 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038658 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038662 4981 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038666 4981 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038669 4981 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038673 4981 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038677 4981 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038682 4981 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038687 4981 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038692 4981 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038696 4981 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038700 4981 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038705 4981 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
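[Note] The long runs of "unrecognized feature gate" warnings appear to be feature_gate.go being handed OpenShift-level gate names (GatewayAPI, NewOLM, PinnedImages, ...) that the kubelet's embedded Kubernetes registry does not know, while gates it does know but that are already GA (CloudDualStackNodeIPs, ValidatingAdmissionPolicy) or deprecated (KMSv1) are applied with a removal warning instead. A minimal sketch of that warn-and-continue classification, with an invented four-entry registry standing in for the real table in feature_gate.go:

```python
# Sketch: how a feature-gate registry reacts to requested gates, mirroring
# the W/I lines in this log. The registry below is invented for illustration.
KNOWN = {
    "CloudDualStackNodeIPs": ("GA", True),
    "ValidatingAdmissionPolicy": ("GA", True),
    "KMSv1": ("Deprecated", False),
    "NodeSwap": ("Beta", False),
}

def apply_gates(requested):
    effective = {name: default for name, (_, default) in KNOWN.items()}
    for name, value in sorted(requested.items()):
        if name not in KNOWN:
            print(f"W feature_gate: unrecognized feature gate: {name}")
            continue
        stage, _ = KNOWN[name]
        if stage in ("GA", "Deprecated"):
            print(f"W feature_gate: Setting {stage} feature gate {name}={value}."
                  f" It will be removed in a future release.")
        effective[name] = value
    print(f"I feature_gate: feature gates: {effective}")
    return effective

apply_gates({"GatewayAPI": True, "KMSv1": True,
             "CloudDualStackNodeIPs": True, "NodeSwap": False})
```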
Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038710 4981 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038714 4981 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038719 4981 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038723 4981 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038727 4981 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.038730 4981 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040013 4981 flags.go:64] FLAG: --address="0.0.0.0" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040026 4981 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040034 4981 flags.go:64] FLAG: --anonymous-auth="true" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040040 4981 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040046 4981 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040050 4981 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040056 4981 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040062 4981 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040066 4981 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040071 4981 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040076 4981 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040081 4981 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040086 4981 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040090 4981 flags.go:64] FLAG: --cgroup-root="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040095 4981 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040099 4981 flags.go:64] FLAG: --client-ca-file="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040103 4981 flags.go:64] FLAG: --cloud-config="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040108 4981 flags.go:64] FLAG: --cloud-provider="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040112 4981 flags.go:64] FLAG: --cluster-dns="[]" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040119 4981 flags.go:64] FLAG: --cluster-domain="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040123 4981 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040128 4981 flags.go:64] FLAG: --config-dir="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040132 4981 flags.go:64] FLAG: 
--container-hints="/etc/cadvisor/container_hints.json" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040137 4981 flags.go:64] FLAG: --container-log-max-files="5" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040143 4981 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040148 4981 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040153 4981 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040158 4981 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040162 4981 flags.go:64] FLAG: --contention-profiling="false" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040166 4981 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040171 4981 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040175 4981 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040180 4981 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040205 4981 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040209 4981 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040213 4981 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040218 4981 flags.go:64] FLAG: --enable-load-reader="false" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040222 4981 flags.go:64] FLAG: --enable-server="true" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040227 4981 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040232 4981 flags.go:64] FLAG: --event-burst="100" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040237 4981 flags.go:64] FLAG: --event-qps="50" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040241 4981 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040246 4981 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040250 4981 flags.go:64] FLAG: --eviction-hard="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040256 4981 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040261 4981 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040266 4981 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040271 4981 flags.go:64] FLAG: --eviction-soft="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040276 4981 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040280 4981 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040285 4981 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040290 4981 flags.go:64] FLAG: --experimental-mounter-path="" Jan 28 15:03:09 crc 
kubenswrapper[4981]: I0128 15:03:09.040294 4981 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040298 4981 flags.go:64] FLAG: --fail-swap-on="true" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040302 4981 flags.go:64] FLAG: --feature-gates="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040307 4981 flags.go:64] FLAG: --file-check-frequency="20s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040312 4981 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040316 4981 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040320 4981 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040324 4981 flags.go:64] FLAG: --healthz-port="10248" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040328 4981 flags.go:64] FLAG: --help="false" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040332 4981 flags.go:64] FLAG: --hostname-override="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040336 4981 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040340 4981 flags.go:64] FLAG: --http-check-frequency="20s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040344 4981 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040350 4981 flags.go:64] FLAG: --image-credential-provider-config="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040354 4981 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040359 4981 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040364 4981 flags.go:64] FLAG: --image-service-endpoint="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040368 4981 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040373 4981 flags.go:64] FLAG: --kube-api-burst="100" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040377 4981 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040381 4981 flags.go:64] FLAG: --kube-api-qps="50" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040385 4981 flags.go:64] FLAG: --kube-reserved="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040389 4981 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040394 4981 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040398 4981 flags.go:64] FLAG: --kubelet-cgroups="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040402 4981 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040407 4981 flags.go:64] FLAG: --lock-file="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040411 4981 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040415 4981 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040419 4981 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040426 4981 flags.go:64] 
FLAG: --log-json-split-stream="false" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040430 4981 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040434 4981 flags.go:64] FLAG: --log-text-split-stream="false" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040439 4981 flags.go:64] FLAG: --logging-format="text" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040443 4981 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040448 4981 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040452 4981 flags.go:64] FLAG: --manifest-url="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040456 4981 flags.go:64] FLAG: --manifest-url-header="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040462 4981 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040466 4981 flags.go:64] FLAG: --max-open-files="1000000" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040472 4981 flags.go:64] FLAG: --max-pods="110" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040476 4981 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040481 4981 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040485 4981 flags.go:64] FLAG: --memory-manager-policy="None" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040489 4981 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040493 4981 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040498 4981 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040502 4981 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040511 4981 flags.go:64] FLAG: --node-status-max-images="50" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040515 4981 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040520 4981 flags.go:64] FLAG: --oom-score-adj="-999" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040524 4981 flags.go:64] FLAG: --pod-cidr="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040530 4981 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040537 4981 flags.go:64] FLAG: --pod-manifest-path="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040541 4981 flags.go:64] FLAG: --pod-max-pids="-1" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040546 4981 flags.go:64] FLAG: --pods-per-core="0" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040550 4981 flags.go:64] FLAG: --port="10250" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040555 4981 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040559 4981 flags.go:64] FLAG: --provider-id="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040563 4981 
flags.go:64] FLAG: --qos-reserved="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040568 4981 flags.go:64] FLAG: --read-only-port="10255" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040573 4981 flags.go:64] FLAG: --register-node="true" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040577 4981 flags.go:64] FLAG: --register-schedulable="true" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040581 4981 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040588 4981 flags.go:64] FLAG: --registry-burst="10" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040592 4981 flags.go:64] FLAG: --registry-qps="5" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040596 4981 flags.go:64] FLAG: --reserved-cpus="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040600 4981 flags.go:64] FLAG: --reserved-memory="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040605 4981 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040610 4981 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040614 4981 flags.go:64] FLAG: --rotate-certificates="false" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040618 4981 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040623 4981 flags.go:64] FLAG: --runonce="false" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040627 4981 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040631 4981 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040636 4981 flags.go:64] FLAG: --seccomp-default="false" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040640 4981 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040644 4981 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040649 4981 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040654 4981 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040658 4981 flags.go:64] FLAG: --storage-driver-password="root" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040662 4981 flags.go:64] FLAG: --storage-driver-secure="false" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040666 4981 flags.go:64] FLAG: --storage-driver-table="stats" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040670 4981 flags.go:64] FLAG: --storage-driver-user="root" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040675 4981 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040679 4981 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040683 4981 flags.go:64] FLAG: --system-cgroups="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040687 4981 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040695 4981 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040699 
4981 flags.go:64] FLAG: --tls-cert-file="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040703 4981 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040710 4981 flags.go:64] FLAG: --tls-min-version="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040714 4981 flags.go:64] FLAG: --tls-private-key-file="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040718 4981 flags.go:64] FLAG: --topology-manager-policy="none" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040729 4981 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040733 4981 flags.go:64] FLAG: --topology-manager-scope="container" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040737 4981 flags.go:64] FLAG: --v="2" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040743 4981 flags.go:64] FLAG: --version="false" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040749 4981 flags.go:64] FLAG: --vmodule="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040755 4981 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.040759 4981 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040863 4981 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040868 4981 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040872 4981 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040877 4981 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040881 4981 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040886 4981 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040892 4981 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040909 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040915 4981 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040920 4981 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040925 4981 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040929 4981 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040933 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040937 4981 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040940 4981 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040944 4981 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 
15:03:09.040948 4981 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040951 4981 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040955 4981 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040959 4981 feature_gate.go:330] unrecognized feature gate: Example Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040963 4981 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040966 4981 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040970 4981 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040976 4981 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040982 4981 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040985 4981 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040989 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040994 4981 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.040999 4981 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041003 4981 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041007 4981 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041011 4981 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041014 4981 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041018 4981 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041023 4981 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
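[Note] At -v=2 the kubelet echoes every command-line flag as a flags.go:64] FLAG: --name="value" entry, which is what the long run above is. Folding that dump back into a dictionary makes it easy to diff flag sets between boots; a small sketch, assuming journal output on stdin (the unit name to pass journalctl -u is an assumption; the process here logs via kubenswrapper):

```python
# Sketch: collect the kubelet's 'FLAG: --name="value"' dump from a journal
# stream into a dict. finditer copes with journal lines that carry several
# entries run together, as in this capture.
import re
import sys

FLAG_RE = re.compile(r'FLAG: (--[\w-]+)="([^"]*)"')

def collect_flags(stream):
    flags = {}
    for line in stream:
        for m in FLAG_RE.finditer(line):
            flags[m.group(1)] = m.group(2)
    return flags

if __name__ == "__main__":
    # e.g.  journalctl -b -u kubelet | python3 collect_flags.py
    for name, value in sorted(collect_flags(sys.stdin).items()):
        print(f"{name} = {value!r}")
```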
Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041027 4981 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041031 4981 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041034 4981 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041038 4981 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041042 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041049 4981 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041053 4981 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041056 4981 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041061 4981 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041065 4981 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041069 4981 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041072 4981 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041075 4981 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041079 4981 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041082 4981 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041086 4981 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041089 4981 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041093 4981 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041096 4981 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041101 4981 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041105 4981 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041110 4981 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041113 4981 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041117 4981 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041121 4981 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041125 4981 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041128 4981 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041132 4981 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041135 4981 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041139 4981 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041142 4981 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041146 4981 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041149 4981 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041153 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041156 4981 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.041159 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.041170 4981 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.054294 4981 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.054715 4981 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.054835 4981 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.054856 4981 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.054865 4981 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.054875 4981 feature_gate.go:330] 
unrecognized feature gate: UpgradeStatus Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.054885 4981 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.054894 4981 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.054903 4981 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.054911 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.054919 4981 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.054927 4981 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.054935 4981 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.054942 4981 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.054950 4981 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.054960 4981 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.054972 4981 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.054982 4981 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.054990 4981 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.054999 4981 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055009 4981 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055021 4981 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055029 4981 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055037 4981 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055046 4981 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055053 4981 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055061 4981 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055069 4981 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055078 4981 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055085 4981 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055093 4981 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055101 4981 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055109 4981 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055117 4981 feature_gate.go:330] unrecognized feature gate: Example Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055124 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055132 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055140 4981 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055149 4981 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055157 4981 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055164 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055172 4981 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055180 4981 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055210 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055218 4981 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055227 4981 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055234 4981 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055242 4981 feature_gate.go:330] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055250 4981 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055260 4981 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055270 4981 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055279 4981 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055287 4981 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055297 4981 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055305 4981 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055312 4981 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055320 4981 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055329 4981 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055337 4981 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055344 4981 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055353 4981 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055363 4981 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055373 4981 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055385 4981 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055395 4981 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055404 4981 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055412 4981 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055421 4981 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055429 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055437 4981 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055445 4981 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055453 4981 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055461 4981 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055469 4981 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.055482 4981 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055701 4981 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055713 4981 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055722 4981 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055730 4981 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055738 4981 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055746 4981 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055753 4981 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055761 4981 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055769 4981 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055779 4981 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 
15:03:09.055787 4981 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055794 4981 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055802 4981 feature_gate.go:330] unrecognized feature gate: Example Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055810 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055819 4981 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055827 4981 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055835 4981 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055843 4981 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055851 4981 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055859 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055867 4981 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055874 4981 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055882 4981 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055890 4981 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055898 4981 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055906 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055914 4981 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055922 4981 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055931 4981 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055940 4981 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055948 4981 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055957 4981 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055965 4981 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055973 4981 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055981 4981 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.055990 4981 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 15:03:09 
crc kubenswrapper[4981]: W0128 15:03:09.055997 4981 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056005 4981 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056012 4981 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056020 4981 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056028 4981 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056039 4981 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056049 4981 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056059 4981 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056068 4981 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056076 4981 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056084 4981 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056092 4981 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056101 4981 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056109 4981 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056118 4981 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056126 4981 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056134 4981 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056141 4981 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056149 4981 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056157 4981 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056165 4981 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056175 4981 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056204 4981 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056213 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056223 4981 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056233 4981 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056242 4981 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056251 4981 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056259 4981 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056268 4981 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056277 4981 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056285 4981 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056294 4981 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056304 4981 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.056315 4981 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.056327 4981 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.056554 4981 server.go:940] "Client rotation is on, will bootstrap in background" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.062064 4981 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.062220 4981 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
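[Note] The client-rotation entries above are the certificate manager starting up: the existing kubeconfig is still valid, so no bootstrap is needed, and rotation is scheduled ahead of expiry. In client-go's certificate manager the deadline is jittered into roughly the 70-90% band of the certificate's validity window, which is consistent with the deadline logged just below (2025-12-12 for a cert expiring 2026-02-24, assuming a roughly one-year client cert); the connection-refused error that follows is the first rotation attempt racing an API server that is not serving yet, and it is retried. A sketch of the deadline arithmetic only, with the issue time assumed:

```python
# Sketch: jittered rotation deadline in the spirit of client-go's certificate
# manager: rotate at a random point ~70-90% into the cert's validity window.
# This reproduces the arithmetic, not the actual implementation.
import random
from datetime import datetime, timedelta

def rotation_deadline(not_before, not_after):
    total = (not_after - not_before).total_seconds()
    jitter = 0.7 + 0.2 * random.random()   # uniform in [0.7, 0.9)
    return not_before + timedelta(seconds=total * jitter)

# notAfter from the log below; notBefore is an assumption (1-year client cert).
not_after = datetime(2026, 2, 24, 5, 52, 8)
not_before = not_after - timedelta(days=365)
print(rotation_deadline(not_before, not_after))  # lands Nov 2025 - Jan 2026
```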
Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.064122 4981 server.go:997] "Starting client certificate rotation" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.064169 4981 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.065925 4981 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-12 19:14:44.589845326 +0000 UTC Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.066055 4981 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.096633 4981 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.099605 4981 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 15:03:09 crc kubenswrapper[4981]: E0128 15:03:09.100041 4981 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.120287 4981 log.go:25] "Validated CRI v1 runtime API" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.169233 4981 log.go:25] "Validated CRI v1 image API" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.171728 4981 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.179737 4981 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-28-14-58-20-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.179786 4981 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.207694 4981 manager.go:217] Machine: {Timestamp:2026-01-28 15:03:09.203471192 +0000 UTC m=+0.655629513 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654112256 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:bdcb13d9-b39a-47f8-8de2-451381277fbd BootID:e730fd4b-ce6e-4137-9fbe-a43501684872 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108168 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 
Capacity:16827056128 Type:vfs Inodes:4108168 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827056128 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ed:e4:37 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ed:e4:37 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:fa:ad:bd Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:c1:a8:57 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:f2:b9:76 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:20:7a:b9 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:c2:34:bb:89:c3:69 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:1a:9a:70:9e:5c:7c Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654112256 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 
Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.208136 4981 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.208547 4981 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.210927 4981 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.211504 4981 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.211581 4981 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.211935 4981 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.211955 4981 
container_manager_linux.go:303] "Creating device plugin manager" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.212629 4981 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.212702 4981 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.213112 4981 state_mem.go:36] "Initialized new in-memory state store" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.213348 4981 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.217338 4981 kubelet.go:418] "Attempting to sync node with API server" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.217387 4981 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.217440 4981 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.217469 4981 kubelet.go:324] "Adding apiserver pod source" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.217503 4981 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.221613 4981 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.222446 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.222475 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Jan 28 15:03:09 crc kubenswrapper[4981]: E0128 15:03:09.222569 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:03:09 crc kubenswrapper[4981]: E0128 15:03:09.222586 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.222606 4981 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
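Every "connection refused" in this stretch is the same underlying condition: nothing is listening on api-int.crc.testing:6443 yet, and on a single-node cluster like this one the kube-apiserver is itself a static pod (note the "Adding static pod path" line) that this kubelet has yet to start, so client-go's reflectors simply retry until the socket opens. The failure mode reduces to a bare TCP dial; host and port below are taken from the log:

    // Hedged sketch: the reflectors' failure as a plain TCP dial;
    // api-int.crc.testing:6443 is the endpoint from the errors above.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver not reachable yet:", err) // connect: connection refused
            return
        }
        defer conn.Close()
        fmt.Println("apiserver is listening")
    }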
Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.225237 4981 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.226751 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.226781 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.226791 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.226800 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.226817 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.226827 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.226837 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.226852 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.226863 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.226872 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.226911 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.226921 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.229049 4981 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.232940 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.234363 4981 server.go:1280] "Started kubelet" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.235149 4981 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.235650 4981 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.239829 4981 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 15:03:09 crc systemd[1]: Started Kubernetes Kubelet. 
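The podresources endpoint announced above is throttled at qps=100 with a burst of 10, a standard token-bucket shape. As an illustration of that policy (golang.org/x/time/rate standing in for the kubelet's own ratelimit.go, which this is not):

    // Hedged sketch: a token bucket with the same shape as the
    // "Setting rate limiting for endpoint" line (qps=100, burst=10).
    package main

    import (
        "context"
        "fmt"

        "golang.org/x/time/rate"
    )

    func main() {
        lim := rate.NewLimiter(rate.Limit(100), 10) // 100 tokens/s steady, bursts of 10
        for i := 0; i < 5; i++ {
            if err := lim.Wait(context.Background()); err != nil {
                panic(err)
            }
            fmt.Println("podresources request", i, "admitted")
        }
    }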
Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.242171 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.242242 4981 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.242400 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:53:19.36400439 +0000 UTC Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.242460 4981 server.go:460] "Adding debug handlers to kubelet server" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.242682 4981 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.242703 4981 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 28 15:03:09 crc kubenswrapper[4981]: E0128 15:03:09.242751 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.242801 4981 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 15:03:09 crc kubenswrapper[4981]: E0128 15:03:09.242083 4981 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.151:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188eed462b5c4db5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:03:09.234146741 +0000 UTC m=+0.686305012,LastTimestamp:2026-01-28 15:03:09.234146741 +0000 UTC m=+0.686305012,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.243566 4981 factory.go:55] Registering systemd factory Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.243624 4981 factory.go:221] Registration of the systemd container factory successfully Jan 28 15:03:09 crc kubenswrapper[4981]: E0128 15:03:09.243575 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="200ms" Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.243662 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Jan 28 15:03:09 crc kubenswrapper[4981]: E0128 15:03:09.243788 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.243981 4981 factory.go:153] Registering CRI-O factory Jan 28 15:03:09 crc kubenswrapper[4981]: 
I0128 15:03:09.244010 4981 factory.go:221] Registration of the crio container factory successfully Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.244085 4981 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.244113 4981 factory.go:103] Registering Raw factory Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.244136 4981 manager.go:1196] Started watching for new ooms in manager Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.244914 4981 manager.go:319] Starting recovery of all containers Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255201 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255307 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255325 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255365 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255379 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255393 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255404 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255447 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255531 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255639 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255662 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255697 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255713 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255810 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255828 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255843 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255926 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255939 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255971 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255986 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.255999 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.256011 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.256089 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.256178 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.256353 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.256367 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.256474 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.256492 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.256505 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.256518 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.256552 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.256574 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.256647 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.256662 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.259663 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.259767 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.259795 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.259832 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.259855 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.259884 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.259904 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.259925 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.259956 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.259975 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.260001 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.260025 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.260609 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.260641 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.260658 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.260686 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.260702 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.260724 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.260755 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" 
volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.260789 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.260806 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.260835 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.260867 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.260884 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.260907 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.260925 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.260948 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.260974 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.260999 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.261029 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.261059 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.261089 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.261114 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.261132 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.261159 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.261181 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.261210 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.261230 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.261245 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.261267 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262453 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262484 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262507 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262520 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262538 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262559 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262573 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262593 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262606 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262618 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262635 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262651 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262671 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262685 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262699 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262720 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262734 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262769 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262783 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262795 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262815 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262828 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262849 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262861 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262875 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262895 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262908 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262933 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262947 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262960 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.262992 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.263016 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.263040 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.263053 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.263079 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.263096 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.263120 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.263148 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.263163 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.263183 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.263228 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.263246 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.263259 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.263275 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.263287 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.263300 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.263378 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.263876 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.268765 4981 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.268896 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.269017 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.269068 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.269092 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.269113 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.269133 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 
15:03:09.269156 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.269245 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.269264 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.269284 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.269307 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.269326 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.269351 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.269377 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.269403 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.269432 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.269460 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.269484 4981 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.269510 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.269535 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.269577 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.271443 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.271485 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.271507 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.271529 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.271549 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.271570 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.271589 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.271617 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.271638 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.271658 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.271714 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.272299 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.272402 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.272441 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.272479 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.272519 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.272561 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.272591 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.272621 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.272655 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.272687 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.272718 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.272799 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.272828 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.272857 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.272885 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.272915 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.272943 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.272975 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273005 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273033 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273060 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273091 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273123 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273165 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273225 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273257 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273287 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273319 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273347 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273377 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273405 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273432 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273460 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273500 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273528 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273557 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273588 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273623 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273654 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273682 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273725 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273752 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273780 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273809 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273834 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273857 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273879 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273900 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273922 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273941 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273962 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.273982 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.274004 4981 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.274026 4981 reconstruct.go:97] "Volume reconstruction finished" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.274040 4981 reconciler.go:26] "Reconciler: start to sync state" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.289375 4981 manager.go:324] Recovery completed Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.304127 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.306465 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.306516 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.306533 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.307792 4981 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.307828 4981 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.307860 4981 state_mem.go:36] "Initialized new in-memory state store" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.314062 4981 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.317311 4981 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.317380 4981 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.317417 4981 kubelet.go:2335] "Starting kubelet main sync loop" Jan 28 15:03:09 crc kubenswrapper[4981]: E0128 15:03:09.317480 4981 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.320830 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Jan 28 15:03:09 crc kubenswrapper[4981]: E0128 15:03:09.320949 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.327675 4981 policy_none.go:49] "None policy: Start" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.328638 4981 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.328685 4981 state_mem.go:35] "Initializing new in-memory state store" Jan 28 15:03:09 crc kubenswrapper[4981]: E0128 15:03:09.343677 4981 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.390798 4981 manager.go:334] "Starting Device Plugin manager" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.390878 4981 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.390900 4981 server.go:79] "Starting device plugin registration server" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.391577 4981 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.391609 4981 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.392497 4981 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.392688 4981 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.392712 4981 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 15:03:09 crc kubenswrapper[4981]: E0128 15:03:09.409901 4981 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.418153 4981 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 28 15:03:09 crc kubenswrapper[4981]: 
I0128 15:03:09.418348 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.420125 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.420238 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.420315 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.420589 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.420953 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.421104 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.422731 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.422776 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.422794 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.422999 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.423067 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.423101 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.423019 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.423337 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.423453 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.424783 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.424832 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.424849 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.424998 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.425027 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.425044 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.425334 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.425578 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.425638 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.426912 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.426939 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.426950 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.427071 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.427121 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.427156 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.427224 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.427355 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.427440 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.428563 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.428624 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.428658 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.428895 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.428946 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.428969 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.429027 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.429108 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.430543 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.430576 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.430589 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4981]: E0128 15:03:09.445148 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="400ms" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.476331 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.476382 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.476418 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.476453 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.476510 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.476559 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.476605 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.476629 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.476676 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.476829 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.476892 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.476934 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.476969 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.477011 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.477054 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.491944 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.493570 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.493675 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.493745 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.493838 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:03:09 crc kubenswrapper[4981]: E0128 15:03:09.494498 4981 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.151:6443: connect: connection refused" node="crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.578407 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.578519 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.578555 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.578586 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.578617 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.578650 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.578681 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.578709 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.578738 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.578738 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.578809 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.578869 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.578874 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc 
kubenswrapper[4981]: I0128 15:03:09.578902 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.578830 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.578897 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.578766 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.579055 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.578740 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.578820 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.579173 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.579257 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.579299 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.579275 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.579364 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.579396 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.579423 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.579425 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.579499 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.579474 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.695628 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.696686 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.696756 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.696781 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.696822 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:03:09 crc kubenswrapper[4981]: E0128 15:03:09.697342 4981 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.151:6443: connect: connection refused" node="crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.750158 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.757145 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.785014 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.807880 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-096a58da335c55edd2099f93a9eb50e395b4bed5d580fbf374c4db6b1af2cf2c WatchSource:0}: Error finding container 096a58da335c55edd2099f93a9eb50e395b4bed5d580fbf374c4db6b1af2cf2c: Status 404 returned error can't find the container with id 096a58da335c55edd2099f93a9eb50e395b4bed5d580fbf374c4db6b1af2cf2c Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.809381 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.809568 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-7c44ddd470b218fda6a2e6d35bc169b2aa161e8654c360a61c430d03389db960 WatchSource:0}: Error finding container 7c44ddd470b218fda6a2e6d35bc169b2aa161e8654c360a61c430d03389db960: Status 404 returned error can't find the container with id 7c44ddd470b218fda6a2e6d35bc169b2aa161e8654c360a61c430d03389db960 Jan 28 15:03:09 crc kubenswrapper[4981]: I0128 15:03:09.814330 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.820143 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-018ed3b57dcdc1dec7bb3ce47793d9c17530222167ea618bcfb60d689124deb5 WatchSource:0}: Error finding container 018ed3b57dcdc1dec7bb3ce47793d9c17530222167ea618bcfb60d689124deb5: Status 404 returned error can't find the container with id 018ed3b57dcdc1dec7bb3ce47793d9c17530222167ea618bcfb60d689124deb5 Jan 28 15:03:09 crc kubenswrapper[4981]: W0128 15:03:09.838472 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-c542752cda1f879b4cdd99d2cbb2eb2f2942d16f5fd66e093fb3cbcff2f69273 WatchSource:0}: Error finding container c542752cda1f879b4cdd99d2cbb2eb2f2942d16f5fd66e093fb3cbcff2f69273: Status 404 returned error can't find the container with id c542752cda1f879b4cdd99d2cbb2eb2f2942d16f5fd66e093fb3cbcff2f69273 Jan 28 15:03:09 crc kubenswrapper[4981]: E0128 15:03:09.847128 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="800ms" Jan 28 15:03:10 crc kubenswrapper[4981]: I0128 15:03:10.098058 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:10 crc kubenswrapper[4981]: I0128 15:03:10.100176 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:10 crc kubenswrapper[4981]: I0128 15:03:10.100241 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:10 crc kubenswrapper[4981]: I0128 15:03:10.100252 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:10 crc kubenswrapper[4981]: I0128 15:03:10.100282 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:03:10 crc kubenswrapper[4981]: E0128 15:03:10.100741 4981 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.151:6443: connect: connection refused" node="crc" Jan 28 15:03:10 crc kubenswrapper[4981]: I0128 15:03:10.234091 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Jan 28 15:03:10 crc kubenswrapper[4981]: I0128 15:03:10.243146 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 17:45:19.191733792 +0000 UTC Jan 28 15:03:10 crc kubenswrapper[4981]: I0128 15:03:10.325401 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c542752cda1f879b4cdd99d2cbb2eb2f2942d16f5fd66e093fb3cbcff2f69273"} Jan 28 15:03:10 crc kubenswrapper[4981]: I0128 15:03:10.326639 4981 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ed4644ac3912f75976f01130b5e8cb820560337211103897dc92f3b3bbd76b14"} Jan 28 15:03:10 crc kubenswrapper[4981]: I0128 15:03:10.329001 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"018ed3b57dcdc1dec7bb3ce47793d9c17530222167ea618bcfb60d689124deb5"} Jan 28 15:03:10 crc kubenswrapper[4981]: I0128 15:03:10.330247 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"096a58da335c55edd2099f93a9eb50e395b4bed5d580fbf374c4db6b1af2cf2c"} Jan 28 15:03:10 crc kubenswrapper[4981]: I0128 15:03:10.331455 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7c44ddd470b218fda6a2e6d35bc169b2aa161e8654c360a61c430d03389db960"} Jan 28 15:03:10 crc kubenswrapper[4981]: W0128 15:03:10.457854 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Jan 28 15:03:10 crc kubenswrapper[4981]: E0128 15:03:10.459805 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:03:10 crc kubenswrapper[4981]: E0128 15:03:10.648052 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="1.6s" Jan 28 15:03:10 crc kubenswrapper[4981]: W0128 15:03:10.674117 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Jan 28 15:03:10 crc kubenswrapper[4981]: E0128 15:03:10.674319 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:03:10 crc kubenswrapper[4981]: W0128 15:03:10.715414 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Jan 28 15:03:10 crc kubenswrapper[4981]: E0128 15:03:10.715527 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:03:10 crc kubenswrapper[4981]: W0128 15:03:10.781612 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Jan 28 15:03:10 crc kubenswrapper[4981]: E0128 15:03:10.781765 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:03:10 crc kubenswrapper[4981]: I0128 15:03:10.901614 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:10 crc kubenswrapper[4981]: I0128 15:03:10.904092 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:10 crc kubenswrapper[4981]: I0128 15:03:10.904146 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:10 crc kubenswrapper[4981]: I0128 15:03:10.904171 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:10 crc kubenswrapper[4981]: I0128 15:03:10.904222 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:03:10 crc kubenswrapper[4981]: E0128 15:03:10.904784 4981 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.151:6443: connect: connection refused" node="crc" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.153227 4981 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 15:03:11 crc kubenswrapper[4981]: E0128 15:03:11.154166 4981 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.234607 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.244105 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:49:54.839090133 +0000 UTC Jan 28 15:03:11 crc kubenswrapper[4981]: E0128 15:03:11.306681 4981 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.151:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188eed462b5c4db5 default 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:03:09.234146741 +0000 UTC m=+0.686305012,LastTimestamp:2026-01-28 15:03:09.234146741 +0000 UTC m=+0.686305012,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.340156 4981 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2b068caf45c9ac97aab90df8270584bc76936e8d096c49e87a51f0eaebd74f6a" exitCode=0 Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.340303 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.340284 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2b068caf45c9ac97aab90df8270584bc76936e8d096c49e87a51f0eaebd74f6a"} Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.341060 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.341099 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.341111 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.342956 4981 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="03b4ad4c73d139cb2ab8966a0ebfe6edf1642de2069cbe4f080d209792127e19" exitCode=0 Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.343025 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"03b4ad4c73d139cb2ab8966a0ebfe6edf1642de2069cbe4f080d209792127e19"} Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.343049 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.343987 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.344045 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.344066 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.345140 4981 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a" exitCode=0 Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.345227 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a"} Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.345371 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.346583 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.346617 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.346630 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.348289 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d"} Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.348326 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541"} Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.348342 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b"} Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.348356 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e"} Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.348366 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.349252 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.349281 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.349294 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.350273 4981 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3" exitCode=0 Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.350308 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3"} Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.350465 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.351692 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.351733 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.351746 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.353300 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.354201 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.354232 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.354242 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:11 crc kubenswrapper[4981]: I0128 15:03:11.426080 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:03:12 crc kubenswrapper[4981]: W0128 15:03:12.192241 4981 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Jan 28 15:03:12 crc kubenswrapper[4981]: E0128 15:03:12.192385 4981 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.151:6443: connect: connection refused" logger="UnhandledError" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.234420 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.151:6443: connect: connection refused Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.244627 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 14:02:03.233596401 +0000 UTC Jan 28 15:03:12 crc kubenswrapper[4981]: E0128 15:03:12.249627 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="3.2s" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.355500 4981 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5bd0c3917fe397632124192605ba7c24a7b4f11ee76b1bbe8985be91c3a4f2c4" exitCode=0 Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.355589 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5bd0c3917fe397632124192605ba7c24a7b4f11ee76b1bbe8985be91c3a4f2c4"} Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.355703 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.356757 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.356809 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.356820 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.369465 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1dc14b3cf4e388495b2e92f6b68c6f252a0896d6d92fc7bf6786b0ae938e8ba2"} Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.369813 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.376160 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.376225 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.376243 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.381822 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5e44b16b8020efea12a40e946909e999169518fb90219b88c84df8eb2696b249"} Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.381892 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.381894 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4fdec11bcd96c80a3dcffa4a5da6e5541079caace1911ad9d3387310299c033b"} Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.382211 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"401004bf4f52b13621d039da3ad10fa2800e605b8e574b16a9200f0447169a8a"} Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.383566 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.383597 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.383607 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 
15:03:12.388247 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.388639 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4"} Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.388721 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a"} Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.388734 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f"} Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.388746 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946"} Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.389245 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.389268 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.389279 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.505110 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.508956 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.509006 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.509019 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:12 crc kubenswrapper[4981]: I0128 15:03:12.509050 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:03:12 crc kubenswrapper[4981]: E0128 15:03:12.509612 4981 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.151:6443: connect: connection refused" node="crc" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.245394 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 15:09:32.63933756 +0000 UTC Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.397017 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"83688abeeebdcce1bb47537b185199fb1628190c3a9cafb7fc188463d50dd92f"} Jan 28 
15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.397234 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.398746 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.398791 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.398810 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.400964 4981 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="89291c3292fbc8cd17f6d6fa8e0401466abab335a93a8fea90e1d1e2c474b13d" exitCode=0 Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.401072 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"89291c3292fbc8cd17f6d6fa8e0401466abab335a93a8fea90e1d1e2c474b13d"} Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.401145 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.401251 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.401345 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.401374 4981 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.401444 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.402980 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.403027 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.403045 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.402991 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.403335 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.403356 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.403631 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.403696 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.403765 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.404082 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.404125 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.404143 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:13 crc kubenswrapper[4981]: I0128 15:03:13.486398 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.069658 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.245646 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 02:05:50.227915582 +0000 UTC Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.410835 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"63cb1662d8b5fec2bdbad6df42b9f54ed7e6b0376a05267274c3970595f9913b"} Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.410906 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5e70764badf60d13980a8be47ecdddc7a3bc84bf33976bf0acac03bf0efa4516"} Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.410911 4981 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.410975 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.410991 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.410927 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"24fab6a5055131a3cdf462827ac71262d1799a6d9f8fbb1037e12d9c9116fb56"} Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.412451 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.412498 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.412514 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.413140 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.413235 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.413247 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.501521 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.586563 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.586864 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.588442 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.588503 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.588516 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:14 crc kubenswrapper[4981]: I0128 15:03:14.597537 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.012846 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.246498 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 03:40:55.30773757 +0000 UTC Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.324145 4981 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.421895 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5e34662dde5d453bf74814ed0fd4a9603c00a3c90f8d9ee3a19ca0f7098204aa"} Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.421983 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.422033 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f202039989f4f225b610c33028e4ef8d6ee911663ab00444f85968a6d9fc866e"} Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.422090 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.422042 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.423986 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.424090 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.424110 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:15 crc 
kubenswrapper[4981]: I0128 15:03:15.424161 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.424178 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.424257 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.424274 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.424298 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.424277 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.710622 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.712164 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.712270 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.712287 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:15 crc kubenswrapper[4981]: I0128 15:03:15.712321 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:03:16 crc kubenswrapper[4981]: I0128 15:03:16.157699 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:03:16 crc kubenswrapper[4981]: I0128 15:03:16.247008 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 06:48:26.794429208 +0000 UTC Jan 28 15:03:16 crc kubenswrapper[4981]: I0128 15:03:16.425351 4981 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 15:03:16 crc kubenswrapper[4981]: I0128 15:03:16.425414 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:16 crc kubenswrapper[4981]: I0128 15:03:16.425498 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:16 crc kubenswrapper[4981]: I0128 15:03:16.425439 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:16 crc kubenswrapper[4981]: I0128 15:03:16.427408 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:16 crc kubenswrapper[4981]: I0128 15:03:16.427454 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:16 crc kubenswrapper[4981]: I0128 15:03:16.427490 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:16 crc kubenswrapper[4981]: I0128 15:03:16.427503 4981 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:16 crc kubenswrapper[4981]: I0128 15:03:16.427515 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:16 crc kubenswrapper[4981]: I0128 15:03:16.427527 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:16 crc kubenswrapper[4981]: I0128 15:03:16.427528 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:16 crc kubenswrapper[4981]: I0128 15:03:16.427606 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:16 crc kubenswrapper[4981]: I0128 15:03:16.427630 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:16 crc kubenswrapper[4981]: I0128 15:03:16.828834 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:03:17 crc kubenswrapper[4981]: I0128 15:03:17.248052 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:28:37.498304027 +0000 UTC Jan 28 15:03:17 crc kubenswrapper[4981]: I0128 15:03:17.429045 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:17 crc kubenswrapper[4981]: I0128 15:03:17.429283 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:17 crc kubenswrapper[4981]: I0128 15:03:17.430578 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:17 crc kubenswrapper[4981]: I0128 15:03:17.430646 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:17 crc kubenswrapper[4981]: I0128 15:03:17.430674 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:17 crc kubenswrapper[4981]: I0128 15:03:17.431480 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:17 crc kubenswrapper[4981]: I0128 15:03:17.431574 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:17 crc kubenswrapper[4981]: I0128 15:03:17.431596 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:17 crc kubenswrapper[4981]: I0128 15:03:17.817019 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 28 15:03:17 crc kubenswrapper[4981]: I0128 15:03:17.817259 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:17 crc kubenswrapper[4981]: I0128 15:03:17.818748 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:17 crc kubenswrapper[4981]: I0128 15:03:17.818809 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:17 crc kubenswrapper[4981]: I0128 15:03:17.818828 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
28 15:03:18 crc kubenswrapper[4981]: I0128 15:03:18.013564 4981 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 15:03:18 crc kubenswrapper[4981]: I0128 15:03:18.013672 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:03:18 crc kubenswrapper[4981]: I0128 15:03:18.248962 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 08:24:40.017121235 +0000 UTC Jan 28 15:03:19 crc kubenswrapper[4981]: I0128 15:03:19.249836 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 09:35:19.871015928 +0000 UTC Jan 28 15:03:19 crc kubenswrapper[4981]: E0128 15:03:19.410959 4981 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 15:03:20 crc kubenswrapper[4981]: I0128 15:03:20.250232 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 11:43:42.590935904 +0000 UTC Jan 28 15:03:21 crc kubenswrapper[4981]: I0128 15:03:21.250635 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 13:59:16.200464102 +0000 UTC Jan 28 15:03:22 crc kubenswrapper[4981]: I0128 15:03:22.251687 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 08:33:23.93841753 +0000 UTC Jan 28 15:03:23 crc kubenswrapper[4981]: I0128 15:03:23.235441 4981 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 28 15:03:23 crc kubenswrapper[4981]: I0128 15:03:23.252834 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:31:58.660467174 +0000 UTC Jan 28 15:03:23 crc kubenswrapper[4981]: I0128 15:03:23.317331 4981 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 28 15:03:23 crc kubenswrapper[4981]: I0128 15:03:23.317419 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 28 15:03:23 crc 
kubenswrapper[4981]: I0128 15:03:23.327873 4981 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 28 15:03:23 crc kubenswrapper[4981]: I0128 15:03:23.327965 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 28 15:03:23 crc kubenswrapper[4981]: I0128 15:03:23.451951 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 15:03:23 crc kubenswrapper[4981]: I0128 15:03:23.453983 4981 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="83688abeeebdcce1bb47537b185199fb1628190c3a9cafb7fc188463d50dd92f" exitCode=255 Jan 28 15:03:23 crc kubenswrapper[4981]: I0128 15:03:23.454030 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"83688abeeebdcce1bb47537b185199fb1628190c3a9cafb7fc188463d50dd92f"} Jan 28 15:03:23 crc kubenswrapper[4981]: I0128 15:03:23.454293 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:23 crc kubenswrapper[4981]: I0128 15:03:23.456155 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:23 crc kubenswrapper[4981]: I0128 15:03:23.456232 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:23 crc kubenswrapper[4981]: I0128 15:03:23.456249 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:23 crc kubenswrapper[4981]: I0128 15:03:23.457157 4981 scope.go:117] "RemoveContainer" containerID="83688abeeebdcce1bb47537b185199fb1628190c3a9cafb7fc188463d50dd92f" Jan 28 15:03:23 crc kubenswrapper[4981]: I0128 15:03:23.931531 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 28 15:03:23 crc kubenswrapper[4981]: I0128 15:03:23.931958 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:23 crc kubenswrapper[4981]: I0128 15:03:23.933415 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:23 crc kubenswrapper[4981]: I0128 15:03:23.933452 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:23 crc kubenswrapper[4981]: I0128 15:03:23.933464 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:23 crc kubenswrapper[4981]: I0128 15:03:23.983383 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 28 15:03:24 crc kubenswrapper[4981]: I0128 15:03:24.253385 4981 certificate_manager.go:356] 
Jan 28 15:03:24 crc kubenswrapper[4981]: I0128 15:03:24.458601 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 28 15:03:24 crc kubenswrapper[4981]: I0128 15:03:24.460324 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915"}
Jan 28 15:03:24 crc kubenswrapper[4981]: I0128 15:03:24.460417 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:03:24 crc kubenswrapper[4981]: I0128 15:03:24.460579 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:03:24 crc kubenswrapper[4981]: I0128 15:03:24.461164 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:24 crc kubenswrapper[4981]: I0128 15:03:24.461228 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:24 crc kubenswrapper[4981]: I0128 15:03:24.461243 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:24 crc kubenswrapper[4981]: I0128 15:03:24.461455 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:24 crc kubenswrapper[4981]: I0128 15:03:24.461507 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:24 crc kubenswrapper[4981]: I0128 15:03:24.461520 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:24 crc kubenswrapper[4981]: I0128 15:03:24.472674 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 28 15:03:24 crc kubenswrapper[4981]: I0128 15:03:24.501528 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:03:25 crc kubenswrapper[4981]: I0128 15:03:25.253906 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 20:12:17.611186559 +0000 UTC
Jan 28 15:03:25 crc kubenswrapper[4981]: I0128 15:03:25.463008 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:03:25 crc kubenswrapper[4981]: I0128 15:03:25.463030 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:03:25 crc kubenswrapper[4981]: I0128 15:03:25.464425 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:25 crc kubenswrapper[4981]: I0128 15:03:25.464460 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:25 crc kubenswrapper[4981]: I0128 15:03:25.464470 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
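Note how every certificate_manager.go line repeats the same expiration (2026-02-24 05:53:03 UTC) but prints a different rotation deadline each second. client-go's certificate manager recomputes a jittered deadline on every pass, on the order of 70-90% of the certificate lifetime past NotBefore, so a fleet of kubelets does not all rotate at the same instant; and because the deadlines printed here are already in the past, rotation is overdue and is retried each second while the API server is still unreachable. A rough sketch of that computation, with illustrative names and an assumed one-year lifetime:

    // Loose imitation of client-go's jittered rotation deadline; the exact
    // jitter window is an assumption of this sketch, not quoted source.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := float64(notAfter.Sub(notBefore))
        // A uniformly random point in [0.7, 0.9) of the lifetime.
        return notBefore.Add(time.Duration((0.7 + 0.2*rand.Float64()) * total))
    }

    func main() {
        notAfter, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2026-02-24 05:53:03 +0000 UTC")
        notBefore := notAfter.AddDate(-1, 0, 0) // assumed one-year validity
        for i := 0; i < 3; i++ {
            // Each call yields a different deadline, as in the log.
            fmt.Println("rotation deadline is", rotationDeadline(notBefore, notAfter))
        }
    }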
Jan 28 15:03:25 crc kubenswrapper[4981]: I0128 15:03:25.464639 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:25 crc kubenswrapper[4981]: I0128 15:03:25.464699 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:25 crc kubenswrapper[4981]: I0128 15:03:25.464724 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:26 crc kubenswrapper[4981]: I0128 15:03:26.166036 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:03:26 crc kubenswrapper[4981]: I0128 15:03:26.254460 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 00:27:21.504774292 +0000 UTC
Jan 28 15:03:26 crc kubenswrapper[4981]: I0128 15:03:26.465508 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:03:26 crc kubenswrapper[4981]: I0128 15:03:26.466912 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:26 crc kubenswrapper[4981]: I0128 15:03:26.466984 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:26 crc kubenswrapper[4981]: I0128 15:03:26.467002 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:26 crc kubenswrapper[4981]: I0128 15:03:26.473606 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:03:26 crc kubenswrapper[4981]: I0128 15:03:26.833877 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:03:26 crc kubenswrapper[4981]: I0128 15:03:26.834044 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:03:26 crc kubenswrapper[4981]: I0128 15:03:26.835462 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:26 crc kubenswrapper[4981]: I0128 15:03:26.835505 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:26 crc kubenswrapper[4981]: I0128 15:03:26.835516 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:27 crc kubenswrapper[4981]: I0128 15:03:27.254967 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 00:00:51.861067093 +0000 UTC
Jan 28 15:03:27 crc kubenswrapper[4981]: I0128 15:03:27.470901 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 15:03:27 crc kubenswrapper[4981]: I0128 15:03:27.473328 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:27 crc kubenswrapper[4981]: I0128 15:03:27.473377 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:27 crc kubenswrapper[4981]: I0128 15:03:27.473391 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
event="NodeHasSufficientPID" Jan 28 15:03:28 crc kubenswrapper[4981]: I0128 15:03:28.014347 4981 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 15:03:28 crc kubenswrapper[4981]: I0128 15:03:28.014471 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 15:03:28 crc kubenswrapper[4981]: I0128 15:03:28.255496 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 15:34:31.413603869 +0000 UTC Jan 28 15:03:28 crc kubenswrapper[4981]: E0128 15:03:28.314162 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 28 15:03:28 crc kubenswrapper[4981]: I0128 15:03:28.317166 4981 trace.go:236] Trace[1532414798]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 15:03:13.559) (total time: 14757ms): Jan 28 15:03:28 crc kubenswrapper[4981]: Trace[1532414798]: ---"Objects listed" error: 14757ms (15:03:28.317) Jan 28 15:03:28 crc kubenswrapper[4981]: Trace[1532414798]: [14.757585451s] [14.757585451s] END Jan 28 15:03:28 crc kubenswrapper[4981]: I0128 15:03:28.317221 4981 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 15:03:28 crc kubenswrapper[4981]: I0128 15:03:28.317819 4981 trace.go:236] Trace[527454662]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 15:03:15.513) (total time: 12804ms): Jan 28 15:03:28 crc kubenswrapper[4981]: Trace[527454662]: ---"Objects listed" error: 12804ms (15:03:28.317) Jan 28 15:03:28 crc kubenswrapper[4981]: Trace[527454662]: [12.804208373s] [12.804208373s] END Jan 28 15:03:28 crc kubenswrapper[4981]: I0128 15:03:28.317842 4981 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 15:03:28 crc kubenswrapper[4981]: I0128 15:03:28.319411 4981 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 28 15:03:28 crc kubenswrapper[4981]: I0128 15:03:28.319706 4981 trace.go:236] Trace[1972727036]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 15:03:13.584) (total time: 14734ms): Jan 28 15:03:28 crc kubenswrapper[4981]: Trace[1972727036]: ---"Objects listed" error: 14734ms (15:03:28.319) Jan 28 15:03:28 crc kubenswrapper[4981]: Trace[1972727036]: [14.734664964s] [14.734664964s] END Jan 28 15:03:28 crc kubenswrapper[4981]: I0128 15:03:28.319742 4981 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 15:03:28 crc kubenswrapper[4981]: E0128 15:03:28.321931 4981 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: 
Jan 28 15:03:28 crc kubenswrapper[4981]: I0128 15:03:28.322497 4981 trace.go:236] Trace[1244381998]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 15:03:13.760) (total time: 14561ms):
Jan 28 15:03:28 crc kubenswrapper[4981]: Trace[1244381998]: ---"Objects listed" error: 14561ms (15:03:28.322)
Jan 28 15:03:28 crc kubenswrapper[4981]: Trace[1244381998]: [14.561497676s] [14.561497676s] END
Jan 28 15:03:28 crc kubenswrapper[4981]: I0128 15:03:28.322557 4981 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 28 15:03:28 crc kubenswrapper[4981]: I0128 15:03:28.327911 4981 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.230080 4981 apiserver.go:52] "Watching apiserver"
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.256091 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 07:27:06.992745834 +0000 UTC
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.414910 4981 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.415177 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"]
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.415606 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.415706 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.415750 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.415919 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 28 15:03:29 crc kubenswrapper[4981]: E0128 15:03:29.415906 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.416783 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.416875 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
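The "network is not ready ... no CNI configuration file in /etc/kubernetes/cni/net.d/" errors that follow are the usual bootstrap chicken-and-egg: the pods being ADDed include the network operator and node-identity components that will eventually install that CNI configuration, so sandbox creation is skipped until a conf file appears. The runtime's readiness test amounts to scanning the conf directory for a usable file; roughly, under the assumption that the accepted extensions are .conf/.conflist/.json:

    // Hypothetical sketch of the CNI readiness check behind
    // "NetworkReady=false reason:NetworkPluginNotReady".
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func cniReady(confDir string) bool {
        entries, err := os.ReadDir(confDir)
        if err != nil {
            return false // directory missing counts as not ready
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // assumed extension set
                return true
            }
        }
        return false
    }

    func main() {
        if !cniReady("/etc/kubernetes/cni/net.d") {
            fmt.Println("NetworkReady=false reason:NetworkPluginNotReady")
        }
    }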
Jan 28 15:03:29 crc kubenswrapper[4981]: E0128 15:03:29.416915 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:03:29 crc kubenswrapper[4981]: E0128 15:03:29.417082 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.419216 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.419724 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.420100 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.420243 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.420865 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.421509 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.422919 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.423171 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.423656 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.443861 4981 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.457345 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.478255 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
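These "Failed to update status for pod" records all fail for the same reason: the status patch (note the strategic-merge-patch syntax, $setElementOrder) must pass the pod.network-node-identity.openshift.io admission webhook at 127.0.0.1:9743, which is served by one of the very pods whose status is being patched, so the Post gets connection refused until that pod is up and the kubelet simply logs and retries. A sketch of the kind of status patch involved; the kubeconfig path and patch body are assumptions for illustration:

    // Hypothetical sketch of a pod status patch like the ones failing in
    // the log; if a pod admission webhook is down, Patch surfaces the
    // "failed calling webhook ... connection refused" error.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        patch := []byte(`{"status":{"conditions":[{"type":"Ready","status":"False"}]}}`)
        _, err = cs.CoreV1().Pods("openshift-network-diagnostics").Patch(
            context.TODO(), "network-check-target-xd92c",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{}, "status")
        if err != nil {
            fmt.Println("Failed to update status for pod:", err)
        }
    }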
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.504284 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.526382 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.526689 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.526802 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.526909 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.527018 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.527141 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.527449 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.527585 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName:
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.527698 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.527816 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.527921 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.528027 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.528140 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.528326 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.528477 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.528592 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.528694 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.528771 4981 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.529341 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.529378 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.529790 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.530137 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.530234 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.530517 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.530728 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.530842 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.530967 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.531135 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.531333 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.531551 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.531751 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.532027 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.532339 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.528793 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.532728 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.532803 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.532859 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.532897 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.532939 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.532979 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.533016 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.533056 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.533252 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: 
"kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.533663 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.534105 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.534207 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.534250 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.534302 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.534354 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.534391 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.534434 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.534590 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.534608 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.534663 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.534719 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.534767 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.534807 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.534850 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.534892 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.534928 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.534944 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.534970 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535021 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535055 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535077 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535065 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535225 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535301 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535356 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535395 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535430 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535459 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535461 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535490 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535599 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535654 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535701 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535648 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535758 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535738 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535847 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535853 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535910 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535968 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.535995 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.536014 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.536117 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.536141 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.536164 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.536175 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.536214 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.536636 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.536679 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.536691 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.536757 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.536804 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.536851 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.536852 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.536883 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
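PluginName "kubernetes.io/configmap", VolumeGidValue ""

This burst of reconciler_common.go:159 entries is kubelet's volume reconciler starting an unmount for every volume of every pod it is tearing down shortly after startup. Each entry carries the volume name, a UniqueName of the form <plugin>/<pod-UID>-<volume>, and the owning pod UID. The sketch below pulls those fields out of an excerpt like this one; the regex and the file name are my own, keyed to the message shape above, not anything kubelet itself exposes:

```go
// unmount_index.go — minimal sketch: list (volume, pod UID, UniqueName)
// tuples from "operationExecutor.UnmountVolume started" entries on stdin.
// The pattern matches the escaped-quote form these structured klog
// messages have in a journald dump.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var started = regexp.MustCompile(
	`UnmountVolume started for volume \\"([^"\\]+)\\" ` +
		`\(UniqueName: \\"([^"\\]+)\\"\) pod \\"[^"\\]+\\" \(UID: \\"([^"\\]+)\\"\)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // wrapped journal lines are long
	for sc.Scan() {
		// One journal line may carry several glued entries, so find all.
		for _, m := range started.FindAllStringSubmatch(sc.Text(), -1) {
			fmt.Printf("volume=%s uid=%s unique=%s\n", m[1], m[3], m[2])
		}
	}
}
```

Piping `journalctl -u kubelet` output (or this excerpt) through it yields one tuple per unmount start, which makes the flood of interleaved entries easier to scan.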
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.536895 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.537088 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.537089 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.537118 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.537149 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.537170 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.537217 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.537251 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.537272 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.537293 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.537317 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.537321 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.537343 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.537362 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.537388 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.537408 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.537559 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.537841 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.537897 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.537959 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538201 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538237 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538260 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538285 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538304 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538323 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538343 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538368 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538384 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538403 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538422 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538443 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538461 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538481 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538489 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538501 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538565 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538589 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538618 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538646 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538745 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538773 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538787 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538801 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538842 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538865 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538891 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538910 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538907 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538933 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538954 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.538973 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.539465 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.539824 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.540612 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.540780 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.540848 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.540877 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.540964 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.541002 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.541026 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.541573 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.541639 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.542283 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.542311 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.542338 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.542360 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.542491 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: E0128 15:03:29.542534 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:30.042498536 +0000 UTC m=+21.494656787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.542628 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
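PluginName "kubernetes.io/configmap", VolumeGidValue ""

The only error-level entry in this stretch is the E0128 nestedpendingoperations.go:348 line just above: UnmountVolume.TearDown for the hostpath PVC fails because kubevirt.io.hostpath-provisioner is not yet in kubelet's list of registered CSI drivers (the node is only about 21s into startup, per m=+21.49), so kubelet blocks retries of that one operation until 15:03:30.042, i.e. "durationBeforeRetry 500ms". As I understand kubelet's volume-operation backoff, that window grows exponentially on repeated failures; the sketch below is an illustrative reimplementation of that policy, not kubelet source, and the 2x factor and ~2m cap are assumptions taken from kubelet defaults rather than from this log:

```go
// backoff.go — illustrative sketch of the retry policy implied by
// "No retries permitted until ... (durationBeforeRetry 500ms)".
// Assumptions (not stated in this log): the delay doubles per consecutive
// failure and is capped, as in kubelet's exponential backoff defaults.
package main

import (
	"fmt"
	"time"
)

const (
	initialDelay = 500 * time.Millisecond        // matches "durationBeforeRetry 500ms"
	maxDelay     = 2*time.Minute + 2*time.Second // assumed cap
)

// backoff tracks one pending volume operation.
type backoff struct {
	delay   time.Duration // current durationBeforeRetry
	lastErr time.Time     // when the operation last failed
}

// fail records a failure and widens the no-retry window.
func (b *backoff) fail(now time.Time) {
	if b.delay == 0 {
		b.delay = initialDelay
	} else {
		b.delay *= 2
		if b.delay > maxDelay {
			b.delay = maxDelay
		}
	}
	b.lastErr = now
}

// allowed reports whether a new attempt may start, mirroring the
// "No retries permitted until <lastErr+delay>" check.
func (b *backoff) allowed(now time.Time) bool {
	return !now.Before(b.lastErr.Add(b.delay))
}

func main() {
	var b backoff
	now := time.Now()
	for i := 1; i <= 4; i++ {
		b.fail(now)
		fmt.Printf("failure %d: no retries permitted until %s (durationBeforeRetry %v)\n",
			i, now.Add(b.delay).Format("15:04:05.000"), b.delay)
		now = now.Add(b.delay) // pretend each permitted retry fails again
	}
}
```

Once the driver registers its plugin socket with kubelet, the next permitted retry should simply succeed and the pending operation is dropped.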
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.542336 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.542801 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.542793 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.542856 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.542885 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.542933 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.542963 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543004 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543028 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 
15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543072 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543095 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543118 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543157 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543181 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543232 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543232 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543257 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543326 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543346 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543393 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543416 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543466 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543483 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543505 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543549 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543572 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543590 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543636 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543747 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543757 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543795 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543822 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543868 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543895 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543915 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543962 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.543988 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 
15:03:29.544031 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.544052 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.544078 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.544125 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.544148 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.544139 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.544818 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.545398 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
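PluginName "kubernetes.io/projected", VolumeGidValue ""

Each "operationExecutor.UnmountVolume started" entry should eventually be answered by an operation_generator.go:803 "UnmountVolume.TearDown succeeded" entry whose quoted volume path is the same UniqueName, so diffing the two sets is a quick way to find volumes that never finished tearing down (like the hostpath-provisioner PVC above). Below is a minimal sketch under those assumptions; in a truncated excerpt like this one, an unmatched start may of course just complete outside the captured window:

```go
// pending_unmounts.go — illustrative, not a kubelet tool: pair UnmountVolume
// "started" entries with "TearDown succeeded" entries by volume UniqueName
// and report whatever never completed. The regexes are keyed to the exact
// message shapes seen in this journal excerpt (escaped quotes in the
// structured reconciler message, plain quotes in the TearDown message).
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	started   = regexp.MustCompile(`UnmountVolume started for volume \\"[^"\\]+\\" \(UniqueName: \\"([^"\\]+)\\"\)`)
	succeeded = regexp.MustCompile(`UnmountVolume\.TearDown succeeded for volume "([^"]+)"`)
)

func main() {
	pending := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // wrapped journal lines are long
	for sc.Scan() {
		line := sc.Text()
		for _, m := range started.FindAllStringSubmatch(line, -1) {
			pending[m[1]] = true
		}
		for _, m := range succeeded.FindAllStringSubmatch(line, -1) {
			delete(pending, m[1])
		}
	}
	for name := range pending {
		fmt.Println("no TearDown success seen for:", name)
	}
}
```

Keying on the UniqueName works because the reconciler's UniqueName and the TearDown message's quoted volume string are the same <plugin>/<pod-UID>-<volume> path, as the paired entries above show.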
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.544212 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546149 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546202 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546382 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546411 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546434 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546460 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546494 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546525 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546551 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546574 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546595 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546613 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546639 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546660 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546681 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546703 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546725 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546745 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546767 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546788 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546808 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546829 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546853 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546876 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546894 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546914 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546948 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.547061 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.547083 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.547240 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.547268 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.547298 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.547319 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.545503 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.545606 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.545880 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.545969 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.545669 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.546223 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.547797 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.547969 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.548012 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.548181 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.548292 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.548315 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). 
InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.548542 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.548678 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.549090 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.549181 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.549535 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.549569 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.549920 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.549656 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.550315 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.550651 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.552115 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.552207 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.552349 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.552296 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.552711 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.552841 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.553026 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.553143 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.553853 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.554344 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.554376 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.554583 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.554786 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.554958 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555102 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555098 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555136 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.547348 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555271 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555367 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555403 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555421 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555424 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555449 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555682 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555722 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555744 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555777 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555798 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555857 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555878 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555903 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555950 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555974 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555997 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556022 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556155 4981 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556170 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556199 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556198 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556212 4981 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556223 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556235 4981 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556247 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556257 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556270 4981 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556282 4981 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556294 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556304 4981 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556315 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556333 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556345 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556356 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556366 4981 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556376 4981 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556388 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556398 4981 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556408 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556419 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556430 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556439 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556449 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556458 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556468 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556479 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556488 4981 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556499 4981 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556516 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556525 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556535 4981 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556543 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556553 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556570 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556580 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556590 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556600 4981 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556610 4981 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556620 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556631 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556641 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556652 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556662 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556679 4981 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556691 4981 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556706 4981 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556717 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556726 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556734 4981 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556743 4981 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556752 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556762 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556771 4981 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556810 4981 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556819 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556828 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556837 4981 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556847 4981 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556859 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556872 4981 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556881 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556892 4981 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556903 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556914 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556924 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556946 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556958 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556977 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556996 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557007 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557018 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557032 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557227 4981 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557242 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557254 4981 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557266 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557278 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557299 4981 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557310 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557363 4981 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557376 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557385 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557400 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557412 4981 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557447 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557460 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557471 4981 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557483 4981 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557495 4981 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557506 4981 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557526 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557541 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557556 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556282 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556605 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556706 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.556864 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557031 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557051 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557014 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.557540 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.555569 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.558217 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.558314 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.558502 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.558747 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.558764 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.558805 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). 
InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.559144 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.559380 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.559420 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.559613 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.559778 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.559979 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.560021 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.560109 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.560182 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.560549 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.560914 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.561001 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.561607 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.563064 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.562968 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.563347 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.563376 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.563581 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.564121 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.564366 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.564476 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.564502 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.564597 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.564664 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.564790 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.550614 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.564886 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.565107 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.565319 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.565506 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.565514 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: E0128 15:03:29.565683 4981 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.565884 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.565894 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: E0128 15:03:29.565898 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:30.065869734 +0000 UTC m=+21.518027985 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:03:29 crc kubenswrapper[4981]: E0128 15:03:29.566293 4981 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:03:29 crc kubenswrapper[4981]: E0128 15:03:29.566401 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:30.066347386 +0000 UTC m=+21.518505637 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.566446 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.566772 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.566929 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.567296 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.567582 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.567828 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.568065 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.568951 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.569738 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.571327 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.572148 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.574561 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.574906 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.575642 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.575695 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.575841 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.576919 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.577664 4981 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.577790 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.577864 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.577952 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.578616 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: E0128 15:03:29.578705 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:03:29 crc kubenswrapper[4981]: E0128 15:03:29.578728 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:03:29 crc kubenswrapper[4981]: E0128 15:03:29.578793 4981 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:29 crc kubenswrapper[4981]: E0128 15:03:29.578895 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:30.078837735 +0000 UTC m=+21.530995976 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.578888 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.580738 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: E0128 15:03:29.580784 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:03:29 crc kubenswrapper[4981]: E0128 15:03:29.580911 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:03:29 crc kubenswrapper[4981]: E0128 15:03:29.580969 4981 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:29 crc kubenswrapper[4981]: E0128 15:03:29.581061 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:30.08104512 +0000 UTC m=+21.533203581 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.581286 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.581559 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.581750 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.582358 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.582377 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.582882 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.583071 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.586919 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.587458 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.587934 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.588699 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.589676 4981 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.590017 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.594232 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.596628 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.596957 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.597252 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.597304 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.597475 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.597553 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.597648 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.597827 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.598351 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.598538 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.598578 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.598693 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.598926 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.599288 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.604045 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.607545 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.607613 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.607837 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.608767 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.608951 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.610684 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.610810 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.618817 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.658340 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.658711 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.658795 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.658486 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.658977 4981 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.659053 4981 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.659133 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.659226 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.659324 4981 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.659407 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.659475 4981 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.659552 4981 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.659637 4981 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.659749 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.659820 4981 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.659911 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.659988 4981 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.660060 4981 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.660132 4981 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.660234 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.660323 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.660392 4981 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.660472 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.660550 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.660991 4981 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.661108 4981 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.661225 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.661314 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.661392 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.661492 4981 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.661608 4981 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.661697 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.661776 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.661849 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.661925 4981 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.661994 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.662058 4981 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.662156 4981 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.662254 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.662331 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.662499 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.662576 4981 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.662644 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath 
\"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.662718 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.662788 4981 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.662866 4981 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.662933 4981 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.663006 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.663077 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.663203 4981 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.663291 4981 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.663360 4981 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.663432 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.663486 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.663544 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.663596 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" 
Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.663673 4981 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.663756 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.663830 4981 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.663910 4981 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.663987 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.664061 4981 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.664178 4981 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.664282 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.664347 4981 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.664409 4981 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.664475 4981 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.664544 4981 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.664616 4981 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc 
kubenswrapper[4981]: I0128 15:03:29.664692 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.664775 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.664863 4981 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.664952 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.665035 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.665114 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.665353 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.665450 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.665564 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.665640 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.665720 4981 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.665797 4981 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.665879 4981 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc 
kubenswrapper[4981]: I0128 15:03:29.665959 4981 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.666099 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.666176 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.666282 4981 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.666397 4981 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.666520 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.666649 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.666740 4981 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.666841 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.666955 4981 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.667027 4981 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.667101 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.667174 4981 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.667278 4981 
reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.667368 4981 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.667439 4981 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.667513 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.667603 4981 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.676898 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.685629 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.693069 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.742170 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.759741 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.766236 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.768390 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.768414 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: I0128 15:03:29.768427 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:03:29 crc kubenswrapper[4981]: W0128 15:03:29.784274 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-05e80ad6de526ff8add1c4e1263e95c447528d097af2ea83f667ad112a261700 WatchSource:0}: Error finding container 05e80ad6de526ff8add1c4e1263e95c447528d097af2ea83f667ad112a261700: Status 404 returned error can't find the container with id 05e80ad6de526ff8add1c4e1263e95c447528d097af2ea83f667ad112a261700 Jan 28 15:03:29 crc kubenswrapper[4981]: W0128 15:03:29.789605 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-9dcc37f20670ea416e31c0aef6ca66d30847e6fcfcf519d9c4524ad343c4e68e WatchSource:0}: Error finding container 9dcc37f20670ea416e31c0aef6ca66d30847e6fcfcf519d9c4524ad343c4e68e: Status 404 returned error can't find the container with id 9dcc37f20670ea416e31c0aef6ca66d30847e6fcfcf519d9c4524ad343c4e68e Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.070220 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.070344 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.070382 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:30 crc kubenswrapper[4981]: E0128 15:03:30.070402 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 15:03:31.070377727 +0000 UTC m=+22.522535968 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:30 crc kubenswrapper[4981]: E0128 15:03:30.070466 4981 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:03:30 crc kubenswrapper[4981]: E0128 15:03:30.070508 4981 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:03:30 crc kubenswrapper[4981]: E0128 15:03:30.070511 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:31.07050128 +0000 UTC m=+22.522659521 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:03:30 crc kubenswrapper[4981]: E0128 15:03:30.070558 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:31.070551231 +0000 UTC m=+22.522709472 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.171776 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.171905 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:30 crc kubenswrapper[4981]: E0128 15:03:30.172126 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:03:30 crc kubenswrapper[4981]: E0128 15:03:30.172219 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:03:30 crc kubenswrapper[4981]: E0128 15:03:30.172221 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:03:30 crc kubenswrapper[4981]: E0128 15:03:30.172243 4981 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:30 crc kubenswrapper[4981]: E0128 15:03:30.172266 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:03:30 crc kubenswrapper[4981]: E0128 15:03:30.172287 4981 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:30 crc kubenswrapper[4981]: E0128 15:03:30.172352 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:31.172326841 +0000 UTC m=+22.624485362 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:30 crc kubenswrapper[4981]: E0128 15:03:30.172379 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:31.172368162 +0000 UTC m=+22.624526413 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.257173 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 20:57:38.654674284 +0000 UTC Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.481063 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba"} Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.481145 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9dcc37f20670ea416e31c0aef6ca66d30847e6fcfcf519d9c4524ad343c4e68e"} Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.482895 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"05e80ad6de526ff8add1c4e1263e95c447528d097af2ea83f667ad112a261700"} Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.484926 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58"} Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.484972 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85"} Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.484997 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"16e9d12aad7f05a0f3c34b98cbee7f0fb530520a34b204c9d150a4df0f8f42a7"} Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.486884 4981 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.487477 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.489539 4981 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915" exitCode=255 Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.489610 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915"} Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.489706 4981 scope.go:117] "RemoveContainer" containerID="83688abeeebdcce1bb47537b185199fb1628190c3a9cafb7fc188463d50dd92f" Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.501049 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.506461 4981 scope.go:117] "RemoveContainer" containerID="f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915" Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.506482 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 15:03:30 crc kubenswrapper[4981]: E0128 15:03:30.506697 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.516975 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.533692 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.558390 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.579370 4981 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.594917 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.610620 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.624054 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.644733 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://83688abeeebdcce1bb47537b185199fb1628190c3a9cafb7fc188463d50dd92f\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:23Z\\\",\\\"message\\\":\\\"W0128 15:03:12.571134 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 
15:03:12.571586 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769612592 cert, and key in /tmp/serving-cert-2444392219/serving-signer.crt, /tmp/serving-cert-2444392219/serving-signer.key\\\\nI0128 15:03:12.811371 1 observer_polling.go:159] Starting file observer\\\\nW0128 15:03:12.815862 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 15:03:12.816107 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 15:03:12.816930 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2444392219/tls.crt::/tmp/serving-cert-2444392219/tls.key\\\\\\\"\\\\nF0128 15:03:22.997512 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 
dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.691110 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.712375 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.725359 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:30 crc kubenswrapper[4981]: I0128 15:03:30.737313 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:30Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.079491 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:31 crc kubenswrapper[4981]: E0128 15:03:31.079691 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:33.079660468 +0000 UTC m=+24.531818749 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.080449 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:31 crc kubenswrapper[4981]: E0128 15:03:31.080644 4981 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:03:31 crc kubenswrapper[4981]: E0128 15:03:31.080718 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:33.080702284 +0000 UTC m=+24.532860565 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.080654 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:31 crc kubenswrapper[4981]: E0128 15:03:31.080922 4981 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:03:31 crc kubenswrapper[4981]: E0128 15:03:31.081267 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:33.081235597 +0000 UTC m=+24.533393878 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.181716 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.181775 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:31 crc kubenswrapper[4981]: E0128 15:03:31.181920 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:03:31 crc kubenswrapper[4981]: E0128 15:03:31.181939 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:03:31 crc kubenswrapper[4981]: E0128 15:03:31.181955 4981 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:31 crc kubenswrapper[4981]: E0128 15:03:31.182036 4981 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:33.181992302 +0000 UTC m=+24.634150553 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:31 crc kubenswrapper[4981]: E0128 15:03:31.182071 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:03:31 crc kubenswrapper[4981]: E0128 15:03:31.182142 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:03:31 crc kubenswrapper[4981]: E0128 15:03:31.182161 4981 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:31 crc kubenswrapper[4981]: E0128 15:03:31.182286 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:33.182258348 +0000 UTC m=+24.634416789 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.258121 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 00:02:57.152487151 +0000 UTC Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.317797 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.317805 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:31 crc kubenswrapper[4981]: E0128 15:03:31.318027 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.318005 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:31 crc kubenswrapper[4981]: E0128 15:03:31.318356 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:31 crc kubenswrapper[4981]: E0128 15:03:31.318407 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.326546 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.327784 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.330388 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.331812 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.332520 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.333088 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.333763 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.335292 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.336353 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 28 15:03:31 
crc kubenswrapper[4981]: I0128 15:03:31.337338 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.337915 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.339267 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.339768 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.340754 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.341286 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.342255 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.342854 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.343277 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.344453 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.345180 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.345740 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.347152 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.348391 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.351078 4981 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.352254 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.353969 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.356928 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.358157 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.360871 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.362028 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.364091 4981 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.364514 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.369009 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.371499 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.372294 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.374152 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.375071 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.375781 4981 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.376671 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.377660 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.380443 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.382117 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.383792 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.386184 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.387427 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.388945 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.390591 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.393217 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.395012 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.397607 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.398856 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.400257 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.403027 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.404077 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.495390 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.499088 4981 scope.go:117] "RemoveContainer" containerID="f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915" Jan 28 15:03:31 crc kubenswrapper[4981]: E0128 15:03:31.499311 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.521098 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.538223 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.560750 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.589111 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.628546 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.657386 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.676199 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:31Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:31 crc kubenswrapper[4981]: I0128 15:03:31.893675 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:03:32 crc kubenswrapper[4981]: I0128 15:03:32.258942 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:15:15.125600806 +0000 UTC Jan 28 15:03:32 crc kubenswrapper[4981]: I0128 15:03:32.503904 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678"} Jan 28 15:03:32 crc kubenswrapper[4981]: I0128 15:03:32.504851 4981 scope.go:117] "RemoveContainer" containerID="f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915" Jan 28 15:03:32 crc kubenswrapper[4981]: E0128 15:03:32.505183 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 15:03:32 crc kubenswrapper[4981]: I0128 15:03:32.527924 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:32 crc kubenswrapper[4981]: I0128 15:03:32.547619 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:32 crc kubenswrapper[4981]: I0128 15:03:32.568573 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:32 crc kubenswrapper[4981]: I0128 15:03:32.590601 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:32 crc kubenswrapper[4981]: I0128 15:03:32.608510 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:32 crc kubenswrapper[4981]: I0128 15:03:32.627543 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:32 crc kubenswrapper[4981]: I0128 15:03:32.647157 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:32Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:33 crc kubenswrapper[4981]: I0128 15:03:33.099599 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:33 crc kubenswrapper[4981]: I0128 15:03:33.099782 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:33 crc kubenswrapper[4981]: E0128 15:03:33.099837 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:37.099796978 +0000 UTC m=+28.551955249 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:33 crc kubenswrapper[4981]: E0128 15:03:33.099951 4981 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:03:33 crc kubenswrapper[4981]: I0128 15:03:33.100012 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:33 crc kubenswrapper[4981]: E0128 15:03:33.100035 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:37.100011663 +0000 UTC m=+28.552169924 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:03:33 crc kubenswrapper[4981]: E0128 15:03:33.100250 4981 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:03:33 crc kubenswrapper[4981]: E0128 15:03:33.100404 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:37.100370992 +0000 UTC m=+28.552529433 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:03:33 crc kubenswrapper[4981]: I0128 15:03:33.201640 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:33 crc kubenswrapper[4981]: I0128 15:03:33.201713 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:33 crc kubenswrapper[4981]: E0128 15:03:33.201867 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:03:33 crc kubenswrapper[4981]: E0128 15:03:33.201888 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:03:33 crc kubenswrapper[4981]: E0128 15:03:33.201902 4981 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:33 crc kubenswrapper[4981]: E0128 15:03:33.201969 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:37.201952187 +0000 UTC m=+28.654110428 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:33 crc kubenswrapper[4981]: E0128 15:03:33.201992 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:03:33 crc kubenswrapper[4981]: E0128 15:03:33.202056 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:03:33 crc kubenswrapper[4981]: E0128 15:03:33.202079 4981 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:33 crc kubenswrapper[4981]: E0128 15:03:33.202181 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:37.202150602 +0000 UTC m=+28.654308883 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:33 crc kubenswrapper[4981]: I0128 15:03:33.259541 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 22:05:15.252075889 +0000 UTC Jan 28 15:03:33 crc kubenswrapper[4981]: I0128 15:03:33.318803 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:33 crc kubenswrapper[4981]: E0128 15:03:33.318951 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:33 crc kubenswrapper[4981]: I0128 15:03:33.318978 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:33 crc kubenswrapper[4981]: E0128 15:03:33.319119 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:33 crc kubenswrapper[4981]: I0128 15:03:33.318827 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:33 crc kubenswrapper[4981]: E0128 15:03:33.319232 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.104078 4981 csr.go:261] certificate signing request csr-2zj6j is approved, waiting to be issued Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.145955 4981 csr.go:257] certificate signing request csr-2zj6j is issued Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.227765 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-kfmjv"] Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.228156 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-dp2b6"] Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.228178 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-kfmjv" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.228593 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-dp2b6" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.230145 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.230679 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.230711 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.230766 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.231014 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.231883 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.233100 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.242914 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.260558 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 12:38:51.183912785 +0000 UTC Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.261640 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.276021 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.292355 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.309555 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.310047 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/072268dc-a2f0-47ef-86ae-1e7504b832b5-serviceca\") pod \"node-ca-kfmjv\" (UID: \"072268dc-a2f0-47ef-86ae-1e7504b832b5\") " pod="openshift-image-registry/node-ca-kfmjv" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.310117 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdtjc\" (UniqueName: \"kubernetes.io/projected/ff8ae630-1ed6-4dd3-97b6-f93e12901e6a-kube-api-access-tdtjc\") pod \"node-resolver-dp2b6\" (UID: \"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\") " pod="openshift-dns/node-resolver-dp2b6" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.310157 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/072268dc-a2f0-47ef-86ae-1e7504b832b5-host\") pod \"node-ca-kfmjv\" (UID: \"072268dc-a2f0-47ef-86ae-1e7504b832b5\") " pod="openshift-image-registry/node-ca-kfmjv" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.310180 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ff8ae630-1ed6-4dd3-97b6-f93e12901e6a-hosts-file\") pod \"node-resolver-dp2b6\" (UID: \"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\") " pod="openshift-dns/node-resolver-dp2b6" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.310221 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhbhq\" (UniqueName: \"kubernetes.io/projected/072268dc-a2f0-47ef-86ae-1e7504b832b5-kube-api-access-jhbhq\") pod \"node-ca-kfmjv\" (UID: \"072268dc-a2f0-47ef-86ae-1e7504b832b5\") " pod="openshift-image-registry/node-ca-kfmjv" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.337155 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.374454 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.410967 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdtjc\" (UniqueName: \"kubernetes.io/projected/ff8ae630-1ed6-4dd3-97b6-f93e12901e6a-kube-api-access-tdtjc\") pod \"node-resolver-dp2b6\" (UID: \"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\") " pod="openshift-dns/node-resolver-dp2b6" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.411003 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/072268dc-a2f0-47ef-86ae-1e7504b832b5-host\") pod \"node-ca-kfmjv\" (UID: \"072268dc-a2f0-47ef-86ae-1e7504b832b5\") " pod="openshift-image-registry/node-ca-kfmjv" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.411025 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ff8ae630-1ed6-4dd3-97b6-f93e12901e6a-hosts-file\") pod 
\"node-resolver-dp2b6\" (UID: \"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\") " pod="openshift-dns/node-resolver-dp2b6" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.411040 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhbhq\" (UniqueName: \"kubernetes.io/projected/072268dc-a2f0-47ef-86ae-1e7504b832b5-kube-api-access-jhbhq\") pod \"node-ca-kfmjv\" (UID: \"072268dc-a2f0-47ef-86ae-1e7504b832b5\") " pod="openshift-image-registry/node-ca-kfmjv" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.411057 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/072268dc-a2f0-47ef-86ae-1e7504b832b5-serviceca\") pod \"node-ca-kfmjv\" (UID: \"072268dc-a2f0-47ef-86ae-1e7504b832b5\") " pod="openshift-image-registry/node-ca-kfmjv" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.411937 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/072268dc-a2f0-47ef-86ae-1e7504b832b5-serviceca\") pod \"node-ca-kfmjv\" (UID: \"072268dc-a2f0-47ef-86ae-1e7504b832b5\") " pod="openshift-image-registry/node-ca-kfmjv" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.412091 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.412393 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/072268dc-a2f0-47ef-86ae-1e7504b832b5-host\") pod \"node-ca-kfmjv\" (UID: \"072268dc-a2f0-47ef-86ae-1e7504b832b5\") " pod="openshift-image-registry/node-ca-kfmjv" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.412439 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ff8ae630-1ed6-4dd3-97b6-f93e12901e6a-hosts-file\") pod \"node-resolver-dp2b6\" (UID: \"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\") " pod="openshift-dns/node-resolver-dp2b6" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.436842 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.444258 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhbhq\" (UniqueName: \"kubernetes.io/projected/072268dc-a2f0-47ef-86ae-1e7504b832b5-kube-api-access-jhbhq\") pod \"node-ca-kfmjv\" (UID: \"072268dc-a2f0-47ef-86ae-1e7504b832b5\") " pod="openshift-image-registry/node-ca-kfmjv" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.451669 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdtjc\" (UniqueName: \"kubernetes.io/projected/ff8ae630-1ed6-4dd3-97b6-f93e12901e6a-kube-api-access-tdtjc\") pod \"node-resolver-dp2b6\" (UID: \"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\") " pod="openshift-dns/node-resolver-dp2b6" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.454007 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.471110 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.484147 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.494949 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.511758 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.533489 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.542342 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-kfmjv" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.546496 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-dp2b6" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.548852 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: W0128 15:03:34.557724 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod072268dc_a2f0_47ef_86ae_1e7504b832b5.slice/crio-f7b6a226f766e42da6a695136e10916ddf172755b8bdfe65e80ded21ea683450 WatchSource:0}: Error finding container f7b6a226f766e42da6a695136e10916ddf172755b8bdfe65e80ded21ea683450: Status 404 returned error can't find the container with id f7b6a226f766e42da6a695136e10916ddf172755b8bdfe65e80ded21ea683450 Jan 28 15:03:34 crc kubenswrapper[4981]: W0128 15:03:34.559097 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff8ae630_1ed6_4dd3_97b6_f93e12901e6a.slice/crio-bee7a1053fa875513920ff2b747b6a0c3915a5ff9873fd7c13e77a1afea3637d WatchSource:0}: Error finding container bee7a1053fa875513920ff2b747b6a0c3915a5ff9873fd7c13e77a1afea3637d: Status 404 returned error can't find the container with id bee7a1053fa875513920ff2b747b6a0c3915a5ff9873fd7c13e77a1afea3637d Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.576534 4981 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.596492 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.644856 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-rcgbx"] Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.645351 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.648167 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.648818 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.648960 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.649489 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.650249 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.676557 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.689050 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.700787 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.712803 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67525d77-715e-4ec3-bdbb-6854657355c0-mcd-auth-proxy-config\") pod \"machine-config-daemon-rcgbx\" (UID: \"67525d77-715e-4ec3-bdbb-6854657355c0\") " pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.712919 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/67525d77-715e-4ec3-bdbb-6854657355c0-proxy-tls\") pod \"machine-config-daemon-rcgbx\" (UID: \"67525d77-715e-4ec3-bdbb-6854657355c0\") " pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.713371 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg6bb\" (UniqueName: \"kubernetes.io/projected/67525d77-715e-4ec3-bdbb-6854657355c0-kube-api-access-gg6bb\") pod \"machine-config-daemon-rcgbx\" (UID: \"67525d77-715e-4ec3-bdbb-6854657355c0\") " pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.713418 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/67525d77-715e-4ec3-bdbb-6854657355c0-rootfs\") pod \"machine-config-daemon-rcgbx\" (UID: \"67525d77-715e-4ec3-bdbb-6854657355c0\") " pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.714998 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.723145 4981 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.727035 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.727070 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.727080 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.727146 4981 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.730873 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.737774 4981 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.738123 4981 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.741592 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.741644 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.741656 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.741678 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.741716 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:34Z","lastTransitionTime":"2026-01-28T15:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.754161 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: 
E0128 15:03:34.761454 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider 
started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d
34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.770822 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.770906 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:34 
crc kubenswrapper[4981]: I0128 15:03:34.770924 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.771163 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.771179 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:34Z","lastTransitionTime":"2026-01-28T15:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.781938 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: E0128 15:03:34.796231 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.797422 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}
]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.801654 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.801708 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.801722 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.801748 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.801760 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:34Z","lastTransitionTime":"2026-01-28T15:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.814912 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67525d77-715e-4ec3-bdbb-6854657355c0-mcd-auth-proxy-config\") pod \"machine-config-daemon-rcgbx\" (UID: \"67525d77-715e-4ec3-bdbb-6854657355c0\") " pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.814964 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/67525d77-715e-4ec3-bdbb-6854657355c0-proxy-tls\") pod \"machine-config-daemon-rcgbx\" (UID: \"67525d77-715e-4ec3-bdbb-6854657355c0\") " pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.814994 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg6bb\" (UniqueName: \"kubernetes.io/projected/67525d77-715e-4ec3-bdbb-6854657355c0-kube-api-access-gg6bb\") pod \"machine-config-daemon-rcgbx\" (UID: \"67525d77-715e-4ec3-bdbb-6854657355c0\") " pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.815024 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/67525d77-715e-4ec3-bdbb-6854657355c0-rootfs\") pod \"machine-config-daemon-rcgbx\" (UID: \"67525d77-715e-4ec3-bdbb-6854657355c0\") " pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.815085 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/67525d77-715e-4ec3-bdbb-6854657355c0-rootfs\") pod \"machine-config-daemon-rcgbx\" (UID: \"67525d77-715e-4ec3-bdbb-6854657355c0\") " 
pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.815866 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/67525d77-715e-4ec3-bdbb-6854657355c0-mcd-auth-proxy-config\") pod \"machine-config-daemon-rcgbx\" (UID: \"67525d77-715e-4ec3-bdbb-6854657355c0\") " pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:03:34 crc kubenswrapper[4981]: E0128 15:03:34.816079 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.816356 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.818804 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/67525d77-715e-4ec3-bdbb-6854657355c0-proxy-tls\") pod \"machine-config-daemon-rcgbx\" (UID: \"67525d77-715e-4ec3-bdbb-6854657355c0\") " pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.824925 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.825147 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.825245 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.825332 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.825392 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:34Z","lastTransitionTime":"2026-01-28T15:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.841908 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg6bb\" (UniqueName: \"kubernetes.io/projected/67525d77-715e-4ec3-bdbb-6854657355c0-kube-api-access-gg6bb\") pod \"machine-config-daemon-rcgbx\" (UID: \"67525d77-715e-4ec3-bdbb-6854657355c0\") " pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.842409 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: E0128 15:03:34.847011 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.853851 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.853899 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.853912 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.853928 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.853940 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:34Z","lastTransitionTime":"2026-01-28T15:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:34 crc kubenswrapper[4981]: E0128 15:03:34.867359 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:34Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:34 crc kubenswrapper[4981]: E0128 15:03:34.867543 4981 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.869154 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.869207 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.869222 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.869242 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.869258 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:34Z","lastTransitionTime":"2026-01-28T15:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.965065 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.972078 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.972113 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.972126 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.972144 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:34 crc kubenswrapper[4981]: I0128 15:03:34.972157 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:34Z","lastTransitionTime":"2026-01-28T15:03:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:34 crc kubenswrapper[4981]: W0128 15:03:34.976371 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67525d77_715e_4ec3_bdbb_6854657355c0.slice/crio-e8309f895c6e0097050912acf6f2dcca045865b2b2ed18da359ad11d888ddc0f WatchSource:0}: Error finding container e8309f895c6e0097050912acf6f2dcca045865b2b2ed18da359ad11d888ddc0f: Status 404 returned error can't find the container with id e8309f895c6e0097050912acf6f2dcca045865b2b2ed18da359ad11d888ddc0f Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.033835 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.045207 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.058857 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-lwvh4"] Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.059158 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.072071 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-4dgt8"] Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.073073 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.074715 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.075235 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.075368 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.075556 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.075621 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2ss7x"] Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.077444 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.077778 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.077814 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.078043 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.078072 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.078117 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.078135 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.078158 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.078169 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:35Z","lastTransitionTime":"2026-01-28T15:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.082266 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.082288 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.082368 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.082507 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.082609 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.083584 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.084549 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.087709 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.103687 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.117649 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-host-run-netns\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.117689 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-host-run-multus-certs\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.117709 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-cni-netd\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.117729 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cbdbd481-8604-433f-823e-d77a8b8517a8-ovn-node-metrics-cert\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.117747 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-os-release\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.117762 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.117778 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-slash\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.117795 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-multus-socket-dir-parent\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.117808 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-systemd-units\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.117821 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-run-systemd\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc 
kubenswrapper[4981]: I0128 15:03:35.117836 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-878rf\" (UniqueName: \"kubernetes.io/projected/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-kube-api-access-878rf\") pod \"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.117854 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-kubelet\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.117940 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-multus-conf-dir\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.117992 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-multus-daemon-config\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.118029 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-system-cni-dir\") pod \"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119162 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-var-lib-openvswitch\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119251 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-log-socket\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119289 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cbdbd481-8604-433f-823e-d77a8b8517a8-ovnkube-script-lib\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119316 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119341 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-cni-bin\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119614 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-host-run-k8s-cni-cncf-io\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119656 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119681 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-multus-cni-dir\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119724 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-cni-binary-copy\") pod \"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119743 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-etc-openvswitch\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119758 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-run-openvswitch\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119775 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-run-ovn\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119795 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cbdbd481-8604-433f-823e-d77a8b8517a8-env-overrides\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119813 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-etc-kubernetes\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119833 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-hostroot\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119850 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-os-release\") pod \"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119865 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-node-log\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119902 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-host-var-lib-cni-multus\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119921 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkzd2\" (UniqueName: \"kubernetes.io/projected/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-kube-api-access-wkzd2\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119945 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-system-cni-dir\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119962 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-cnibin\") pod \"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119980 4981 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-cnibin\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.119998 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fnr8\" (UniqueName: \"kubernetes.io/projected/cbdbd481-8604-433f-823e-d77a8b8517a8-kube-api-access-2fnr8\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.120020 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-host-var-lib-cni-bin\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.120047 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cbdbd481-8604-433f-823e-d77a8b8517a8-ovnkube-config\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.120066 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-cni-binary-copy\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.120083 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-host-var-lib-kubelet\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.120102 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-run-netns\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.120120 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-run-ovn-kubernetes\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.122396 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.133516 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.145964 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.147173 4981 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-28 14:58:34 +0000 UTC, rotation deadline is 2026-10-19 09:34:08.131876657 +0000 UTC Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.147275 4981 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6330h30m32.984605611s for next certificate rotation Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.149539 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.166377 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.180636 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.180674 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.180686 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.180703 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.180713 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:35Z","lastTransitionTime":"2026-01-28T15:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.185754 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.201014 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.219294 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.220748 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-cni-binary-copy\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.220796 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-host-var-lib-kubelet\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.220818 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-run-netns\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.220900 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-run-ovn-kubernetes\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.220925 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-host-run-netns\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.220935 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-host-var-lib-kubelet\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.220995 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-run-netns\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221048 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-host-run-netns\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221010 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-host-run-multus-certs\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221055 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-run-ovn-kubernetes\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.220950 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-host-run-multus-certs\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221155 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-cni-netd\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221209 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cbdbd481-8604-433f-823e-d77a8b8517a8-ovn-node-metrics-cert\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221233 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-os-release\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221253 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221278 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-slash\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221302 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-multus-socket-dir-parent\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221320 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-systemd-units\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221338 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-run-systemd\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221374 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-kubelet\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221411 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-multus-conf-dir\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221427 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-multus-daemon-config\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221519 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-system-cni-dir\") pod \"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221537 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-878rf\" (UniqueName: \"kubernetes.io/projected/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-kube-api-access-878rf\") pod \"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221563 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-cni-binary-copy\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221588 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-cni-netd\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221630 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-kubelet\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221572 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-var-lib-openvswitch\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221662 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-multus-conf-dir\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221594 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-var-lib-openvswitch\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221671 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-log-socket\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221718 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/cbdbd481-8604-433f-823e-d77a8b8517a8-ovnkube-script-lib\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221735 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-cni-bin\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221759 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-host-run-k8s-cni-cncf-io\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221675 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-os-release\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221774 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221793 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221812 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-multus-cni-dir\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221827 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-cni-binary-copy\") pod \"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221842 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-systemd-units\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221842 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-etc-openvswitch\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221871 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-multus-socket-dir-parent\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221900 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-run-openvswitch\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221911 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-slash\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221922 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-system-cni-dir\") pod \"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221922 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-run-ovn\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221946 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-run-ovn\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221962 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cbdbd481-8604-433f-823e-d77a8b8517a8-env-overrides\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221983 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-etc-kubernetes\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222011 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-hostroot\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " 
pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222028 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-os-release\") pod \"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222046 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-node-log\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222084 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-host-var-lib-cni-multus\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222102 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkzd2\" (UniqueName: \"kubernetes.io/projected/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-kube-api-access-wkzd2\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222123 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-system-cni-dir\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222141 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-cnibin\") pod \"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222158 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-cnibin\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222204 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fnr8\" (UniqueName: \"kubernetes.io/projected/cbdbd481-8604-433f-823e-d77a8b8517a8-kube-api-access-2fnr8\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222236 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-host-var-lib-cni-bin\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222260 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cbdbd481-8604-433f-823e-d77a8b8517a8-ovnkube-config\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222270 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-multus-daemon-config\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222285 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-host-var-lib-cni-multus\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221695 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-log-socket\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222342 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-cni-bin\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.221630 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-run-systemd\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222373 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-host-run-k8s-cni-cncf-io\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222399 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-hostroot\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222532 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-tuning-conf-dir\") pod \"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222579 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-os-release\") pod \"multus-additional-cni-plugins-4dgt8\" 
(UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222607 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-node-log\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222635 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-cnibin\") pod \"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222803 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cbdbd481-8604-433f-823e-d77a8b8517a8-ovnkube-script-lib\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222845 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222975 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-system-cni-dir\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.222983 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.223029 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-multus-cni-dir\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.223041 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-cnibin\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.223071 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-etc-openvswitch\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc 
kubenswrapper[4981]: I0128 15:03:35.223103 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-run-openvswitch\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.223132 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-host-var-lib-cni-bin\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.223149 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-etc-kubernetes\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.223492 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-cni-binary-copy\") pod \"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.225141 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cbdbd481-8604-433f-823e-d77a8b8517a8-env-overrides\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.225504 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cbdbd481-8604-433f-823e-d77a8b8517a8-ovnkube-config\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.225697 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cbdbd481-8604-433f-823e-d77a8b8517a8-ovn-node-metrics-cert\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.237245 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.243378 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fnr8\" (UniqueName: \"kubernetes.io/projected/cbdbd481-8604-433f-823e-d77a8b8517a8-kube-api-access-2fnr8\") pod \"ovnkube-node-2ss7x\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.246525 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-878rf\" (UniqueName: \"kubernetes.io/projected/76561bd4-81e0-4978-ac44-fb6bf5f60c7d-kube-api-access-878rf\") pod \"multus-additional-cni-plugins-4dgt8\" (UID: \"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\") " pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.246716 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkzd2\" (UniqueName: \"kubernetes.io/projected/3cd6b29e-682c-4aec-b039-70d6d75cbcbc-kube-api-access-wkzd2\") pod \"multus-lwvh4\" (UID: \"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\") " pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.252860 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.261674 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 02:42:16.263601553 +0000 UTC Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.279547 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.283801 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.283841 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.283850 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.283866 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.283878 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:35Z","lastTransitionTime":"2026-01-28T15:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.300232 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b
36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.316149 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.318396 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.318484 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:35 crc kubenswrapper[4981]: E0128 15:03:35.318528 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.318562 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:35 crc kubenswrapper[4981]: E0128 15:03:35.318688 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:35 crc kubenswrapper[4981]: E0128 15:03:35.318831 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.328611 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.346245 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is 
not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.371546 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugin
s\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2
bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.384652 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lwvh4" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.390130 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.390244 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.392250 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.392281 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.392294 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.392310 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.392321 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:35Z","lastTransitionTime":"2026-01-28T15:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.396154 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:35 crc kubenswrapper[4981]: W0128 15:03:35.403867 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cd6b29e_682c_4aec_b039_70d6d75cbcbc.slice/crio-ae9ada21d267012771f21352aab1e48f1c929be43309d7a2be122f9d101daa09 WatchSource:0}: Error finding container ae9ada21d267012771f21352aab1e48f1c929be43309d7a2be122f9d101daa09: Status 404 returned error can't find the container with id ae9ada21d267012771f21352aab1e48f1c929be43309d7a2be122f9d101daa09 Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.406988 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: W0128 15:03:35.416476 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbdbd481_8604_433f_823e_d77a8b8517a8.slice/crio-7a2ef33b9f1aa730ca2c7976800044be34eb3600e9b7b5e3edb61c768fb34bbc WatchSource:0}: Error finding container 7a2ef33b9f1aa730ca2c7976800044be34eb3600e9b7b5e3edb61c768fb34bbc: Status 404 returned error can't find the container with id 7a2ef33b9f1aa730ca2c7976800044be34eb3600e9b7b5e3edb61c768fb34bbc Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.420655 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.437526 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.455017 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.471927 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.484308 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.495513 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.495615 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.495651 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.495675 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.495688 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:35Z","lastTransitionTime":"2026-01-28T15:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.516080 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerStarted","Data":"14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5"} Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.516144 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerStarted","Data":"a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6"} Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.516158 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerStarted","Data":"e8309f895c6e0097050912acf6f2dcca045865b2b2ed18da359ad11d888ddc0f"} Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.518991 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kfmjv" event={"ID":"072268dc-a2f0-47ef-86ae-1e7504b832b5","Type":"ContainerStarted","Data":"a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df"} Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.519038 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kfmjv" event={"ID":"072268dc-a2f0-47ef-86ae-1e7504b832b5","Type":"ContainerStarted","Data":"f7b6a226f766e42da6a695136e10916ddf172755b8bdfe65e80ded21ea683450"} Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.521523 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dp2b6" event={"ID":"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a","Type":"ContainerStarted","Data":"27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b"} Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.521555 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-dp2b6" event={"ID":"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a","Type":"ContainerStarted","Data":"bee7a1053fa875513920ff2b747b6a0c3915a5ff9873fd7c13e77a1afea3637d"} Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.523640 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerStarted","Data":"7a2ef33b9f1aa730ca2c7976800044be34eb3600e9b7b5e3edb61c768fb34bbc"} Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.525296 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" event={"ID":"76561bd4-81e0-4978-ac44-fb6bf5f60c7d","Type":"ContainerStarted","Data":"dfa422da2555e09393882d2fe8db7dc06c6c55150d26fef1eee304c8dcbe03c2"} Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.526561 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lwvh4" event={"ID":"3cd6b29e-682c-4aec-b039-70d6d75cbcbc","Type":"ContainerStarted","Data":"ae9ada21d267012771f21352aab1e48f1c929be43309d7a2be122f9d101daa09"} Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.531890 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.545594 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.564791 4981 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting
\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.578372 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.591551 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.597996 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.598152 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.598276 4981 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.598348 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.598412 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:35Z","lastTransitionTime":"2026-01-28T15:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.604377 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.618014 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.630645 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.648398 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.664775 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.677944 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.689398 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.701273 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.701462 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.701558 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.701641 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.701741 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:35Z","lastTransitionTime":"2026-01-28T15:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.703133 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.722291 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.738239 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.751119 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.762860 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.780812 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.794601 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.804613 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.804666 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.804676 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.804691 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.804702 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:35Z","lastTransitionTime":"2026-01-28T15:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.823516 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.874230 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.907042 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.907304 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.907385 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.907448 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.907501 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:35Z","lastTransitionTime":"2026-01-28T15:03:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.921246 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.952730 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec
8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:35 crc kubenswrapper[4981]: I0128 15:03:35.981164 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:35Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.010605 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.010658 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.010672 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.010692 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.010708 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:36Z","lastTransitionTime":"2026-01-28T15:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.020823 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.060174 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.099810 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.113498 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.113539 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.113548 4981 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.113563 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.113571 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:36Z","lastTransitionTime":"2026-01-28T15:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.139818 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z"
Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.216684 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.217113 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.217127 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.217150 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.217163 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:36Z","lastTransitionTime":"2026-01-28T15:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.262572 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 14:06:25.616886587 +0000 UTC
Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.319432 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.319509 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.319522 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.319538 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.319549 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:36Z","lastTransitionTime":"2026-01-28T15:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.421750 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.421784 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.421796 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.421812 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.421823 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:36Z","lastTransitionTime":"2026-01-28T15:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.524862 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.525163 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.525264 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.525364 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.525466 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:36Z","lastTransitionTime":"2026-01-28T15:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.531497 4981 generic.go:334] "Generic (PLEG): container finished" podID="76561bd4-81e0-4978-ac44-fb6bf5f60c7d" containerID="5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272" exitCode=0 Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.531573 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" event={"ID":"76561bd4-81e0-4978-ac44-fb6bf5f60c7d","Type":"ContainerDied","Data":"5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272"} Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.534315 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lwvh4" event={"ID":"3cd6b29e-682c-4aec-b039-70d6d75cbcbc","Type":"ContainerStarted","Data":"1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c"} Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.535779 4981 generic.go:334] "Generic (PLEG): container finished" podID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerID="832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c" exitCode=0 Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.535841 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerDied","Data":"832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c"} Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.546933 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.558768 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.584856 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc 
kubenswrapper[4981]: I0128 15:03:36.602849 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.620159 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.628978 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.629026 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.629036 4981 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.629063 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.629080 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:36Z","lastTransitionTime":"2026-01-28T15:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.635055 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.649480 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.665680 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.682616 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.696881 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.721626 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.733939 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.781823 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.781887 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.781898 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.781919 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.781932 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:36Z","lastTransitionTime":"2026-01-28T15:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.791703 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.814309 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.833647 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.845537 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.862150 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.885127 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.885156 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.885165 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.885179 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.885199 4981 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:36Z","lastTransitionTime":"2026-01-28T15:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.890722 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.905309 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.947035 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.983276 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.987472 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.987514 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.987524 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.987542 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:36 crc kubenswrapper[4981]: I0128 15:03:36.987551 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:36Z","lastTransitionTime":"2026-01-28T15:03:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.018628 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.063935 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.090224 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.090285 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.090304 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.090329 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.090345 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:37Z","lastTransitionTime":"2026-01-28T15:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.101084 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.146806 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.180854 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.187479 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.187592 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") 
pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.187621 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:37 crc kubenswrapper[4981]: E0128 15:03:37.187726 4981 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:03:37 crc kubenswrapper[4981]: E0128 15:03:37.187766 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:03:45.18771918 +0000 UTC m=+36.639877471 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:37 crc kubenswrapper[4981]: E0128 15:03:37.187827 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:45.187813992 +0000 UTC m=+36.639972453 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:03:37 crc kubenswrapper[4981]: E0128 15:03:37.187829 4981 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:03:37 crc kubenswrapper[4981]: E0128 15:03:37.187925 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:45.187905784 +0000 UTC m=+36.640064025 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.192440 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.192493 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.192508 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.192532 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.192545 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:37Z","lastTransitionTime":"2026-01-28T15:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.224358 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.259550 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.263491 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 05:04:07.11288509 +0000 UTC Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.289239 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.289294 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:37 crc kubenswrapper[4981]: E0128 15:03:37.289462 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:03:37 crc kubenswrapper[4981]: E0128 15:03:37.289482 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:03:37 crc kubenswrapper[4981]: E0128 15:03:37.289495 4981 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:37 crc kubenswrapper[4981]: E0128 15:03:37.289493 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:03:37 crc kubenswrapper[4981]: E0128 15:03:37.289531 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:03:37 crc kubenswrapper[4981]: E0128 
15:03:37.289545 4981 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:37 crc kubenswrapper[4981]: E0128 15:03:37.289557 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:45.289540391 +0000 UTC m=+36.741698642 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:37 crc kubenswrapper[4981]: E0128 15:03:37.289605 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:45.289587942 +0000 UTC m=+36.741746183 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.296003 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.296047 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.296059 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.296075 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.296087 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:37Z","lastTransitionTime":"2026-01-28T15:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.318443 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:37 crc kubenswrapper[4981]: E0128 15:03:37.318633 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.319130 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:37 crc kubenswrapper[4981]: E0128 15:03:37.319250 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.319325 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:37 crc kubenswrapper[4981]: E0128 15:03:37.319396 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.399551 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.399602 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.399612 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.399630 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.399639 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:37Z","lastTransitionTime":"2026-01-28T15:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.502382 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.502432 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.502443 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.502461 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.502473 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:37Z","lastTransitionTime":"2026-01-28T15:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.545421 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerStarted","Data":"323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867"} Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.545481 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerStarted","Data":"fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25"} Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.545498 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerStarted","Data":"4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c"} Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.545511 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerStarted","Data":"e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211"} Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.545522 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerStarted","Data":"dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60"} Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.545534 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerStarted","Data":"646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf"} Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.547921 4981 generic.go:334] "Generic (PLEG): container finished" podID="76561bd4-81e0-4978-ac44-fb6bf5f60c7d" containerID="85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988" exitCode=0 Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.548016 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-4dgt8" event={"ID":"76561bd4-81e0-4978-ac44-fb6bf5f60c7d","Type":"ContainerDied","Data":"85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988"} Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.574709 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.590527 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.603642 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.605245 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.605283 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.605293 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.605310 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.605319 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:37Z","lastTransitionTime":"2026-01-28T15:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.615455 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.632787 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.645884 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.658121 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.672205 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.687571 4981 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"c
ontainerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.703297 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.710754 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.710812 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.710828 4981 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.710847 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.710860 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:37Z","lastTransitionTime":"2026-01-28T15:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.720480 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.741059 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.782452 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.813246 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.813286 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.813298 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.813313 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.813327 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:37Z","lastTransitionTime":"2026-01-28T15:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.826048 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.916203 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.916248 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.916258 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.916272 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:37 crc kubenswrapper[4981]: I0128 15:03:37.916282 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:37Z","lastTransitionTime":"2026-01-28T15:03:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.025620 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.025667 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.025677 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.025693 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.025703 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:38Z","lastTransitionTime":"2026-01-28T15:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.129475 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.129524 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.129541 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.129560 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.129574 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:38Z","lastTransitionTime":"2026-01-28T15:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.232589 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.232662 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.232687 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.232716 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.232736 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:38Z","lastTransitionTime":"2026-01-28T15:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.263673 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 02:59:30.639422069 +0000 UTC Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.335782 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.335820 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.335831 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.335849 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.335858 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:38Z","lastTransitionTime":"2026-01-28T15:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.438784 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.438994 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.439010 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.439040 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.439057 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:38Z","lastTransitionTime":"2026-01-28T15:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.542130 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.542167 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.542175 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.542206 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.542217 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:38Z","lastTransitionTime":"2026-01-28T15:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.552238 4981 generic.go:334] "Generic (PLEG): container finished" podID="76561bd4-81e0-4978-ac44-fb6bf5f60c7d" containerID="ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736" exitCode=0 Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.552276 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" event={"ID":"76561bd4-81e0-4978-ac44-fb6bf5f60c7d","Type":"ContainerDied","Data":"ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736"} Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.571388 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.587089 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.603524 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.617086 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.628110 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.645636 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.645716 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.645744 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.645774 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.645798 4981 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:38Z","lastTransitionTime":"2026-01-28T15:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.652682 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:38Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.667971 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.683339 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.699807 4981 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.716415 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.731619 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.744583 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.748967 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.749018 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.749030 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.749051 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.749064 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:38Z","lastTransitionTime":"2026-01-28T15:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.757728 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.770880 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.851417 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.851485 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.851503 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.851526 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.851545 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:38Z","lastTransitionTime":"2026-01-28T15:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.954246 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.954532 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.954614 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.954688 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:38 crc kubenswrapper[4981]: I0128 15:03:38.954760 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:38Z","lastTransitionTime":"2026-01-28T15:03:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.058045 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.058110 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.058122 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.058139 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.058151 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:39Z","lastTransitionTime":"2026-01-28T15:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.065338 4981 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.162490 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.164874 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.165039 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.165239 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.165424 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:39Z","lastTransitionTime":"2026-01-28T15:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.265170 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:54:59.766454639 +0000 UTC Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.269759 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.269841 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.269861 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.269891 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.269906 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:39Z","lastTransitionTime":"2026-01-28T15:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.318073 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:39 crc kubenswrapper[4981]: E0128 15:03:39.318328 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.318821 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:39 crc kubenswrapper[4981]: E0128 15:03:39.318923 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.319007 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:39 crc kubenswrapper[4981]: E0128 15:03:39.319096 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.350165 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.372990 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.373055 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.373072 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.373095 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.373113 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:39Z","lastTransitionTime":"2026-01-28T15:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.373711 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.399382 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"nam
e\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.419181 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.431431 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.444049 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.456906 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.470556 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.475362 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.475401 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.475414 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.475429 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.475440 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:39Z","lastTransitionTime":"2026-01-28T15:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.486425 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.503656 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.519159 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.530062 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.543083 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.561008 4981 generic.go:334] "Generic (PLEG): container finished" podID="76561bd4-81e0-4978-ac44-fb6bf5f60c7d" containerID="c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04" exitCode=0 Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.561096 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" event={"ID":"76561bd4-81e0-4978-ac44-fb6bf5f60c7d","Type":"ContainerDied","Data":"c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04"} Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.574179 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"nam
e\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.583317 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.583366 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.583381 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.583402 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.583415 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:39Z","lastTransitionTime":"2026-01-28T15:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.601466 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.620757 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.637800 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.655443 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.685897 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.685942 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.685953 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.685969 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.685982 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:39Z","lastTransitionTime":"2026-01-28T15:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.691092 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.710003 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.725575 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.740616 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.756847 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.771426 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.782824 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.788591 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.788638 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.788651 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.788667 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.788676 4981 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:39Z","lastTransitionTime":"2026-01-28T15:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.798464 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.811246 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.833463 4981 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb
832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.891387 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.891437 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.891451 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.891468 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.891481 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:39Z","lastTransitionTime":"2026-01-28T15:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.994096 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.994129 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.994138 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.994152 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:39 crc kubenswrapper[4981]: I0128 15:03:39.994161 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:39Z","lastTransitionTime":"2026-01-28T15:03:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.097324 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.097400 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.097422 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.097448 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.097469 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:40Z","lastTransitionTime":"2026-01-28T15:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.201038 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.201110 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.201128 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.201157 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.201179 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:40Z","lastTransitionTime":"2026-01-28T15:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.265914 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 00:56:10.224869133 +0000 UTC Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.304070 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.304154 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.304178 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.304254 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.304280 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:40Z","lastTransitionTime":"2026-01-28T15:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.407476 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.407517 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.407530 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.407549 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.407564 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:40Z","lastTransitionTime":"2026-01-28T15:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.511033 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.511079 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.511098 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.511122 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.511139 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:40Z","lastTransitionTime":"2026-01-28T15:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.569808 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerStarted","Data":"99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9"} Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.573572 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" event={"ID":"76561bd4-81e0-4978-ac44-fb6bf5f60c7d","Type":"ContainerStarted","Data":"cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4"} Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.596637 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:40Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.614476 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.614543 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.614561 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.614584 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.614605 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:40Z","lastTransitionTime":"2026-01-28T15:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.615315 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:40Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.632347 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:40Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.652073 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:40Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.669133 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:40Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.696826 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:40Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.714832 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:40Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.717996 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.718082 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.718105 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.718137 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.718158 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:40Z","lastTransitionTime":"2026-01-28T15:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.729886 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:40Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.744689 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:40Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.766334 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:40Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.782824 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:40Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.798517 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:40Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.813005 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:40Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.820628 4981 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.820675 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.820686 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.820707 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.820723 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:40Z","lastTransitionTime":"2026-01-28T15:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.832576 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:40Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.924125 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.924180 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.924217 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.924238 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:40 crc kubenswrapper[4981]: I0128 15:03:40.924250 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:40Z","lastTransitionTime":"2026-01-28T15:03:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.027831 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.027897 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.027914 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.027941 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.027959 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:41Z","lastTransitionTime":"2026-01-28T15:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.132141 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.132245 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.132271 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.132307 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.132333 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:41Z","lastTransitionTime":"2026-01-28T15:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.235409 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.235471 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.235486 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.235516 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.235533 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:41Z","lastTransitionTime":"2026-01-28T15:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.266614 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 13:20:44.781252375 +0000 UTC Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.318116 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.318157 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.318206 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:41 crc kubenswrapper[4981]: E0128 15:03:41.318355 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:41 crc kubenswrapper[4981]: E0128 15:03:41.318548 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:41 crc kubenswrapper[4981]: E0128 15:03:41.318712 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.338583 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.338645 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.338658 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.338681 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.338693 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:41Z","lastTransitionTime":"2026-01-28T15:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.441541 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.441583 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.441592 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.441607 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.441617 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:41Z","lastTransitionTime":"2026-01-28T15:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.544996 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.545062 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.545081 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.545108 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.545129 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:41Z","lastTransitionTime":"2026-01-28T15:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.647265 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.647321 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.647338 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.647358 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.647374 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:41Z","lastTransitionTime":"2026-01-28T15:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.749820 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.749874 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.749893 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.749917 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.749936 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:41Z","lastTransitionTime":"2026-01-28T15:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.852952 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.853013 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.853056 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.853083 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.853103 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:41Z","lastTransitionTime":"2026-01-28T15:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.956362 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.956424 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.956442 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.956465 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:41 crc kubenswrapper[4981]: I0128 15:03:41.956483 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:41Z","lastTransitionTime":"2026-01-28T15:03:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.059686 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.059784 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.059802 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.059859 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.059878 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:42Z","lastTransitionTime":"2026-01-28T15:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.166016 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.166085 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.166109 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.166141 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.166163 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:42Z","lastTransitionTime":"2026-01-28T15:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.267349 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 14:04:37.971035968 +0000 UTC Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.268814 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.268878 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.269269 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.269290 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.269328 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:42Z","lastTransitionTime":"2026-01-28T15:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.372015 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.372163 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.372179 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.372213 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.372226 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:42Z","lastTransitionTime":"2026-01-28T15:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.476176 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.476244 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.476261 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.476283 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.476298 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:42Z","lastTransitionTime":"2026-01-28T15:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.579803 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.579876 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.579901 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.579933 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.579957 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:42Z","lastTransitionTime":"2026-01-28T15:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.683362 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.683442 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.683461 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.683490 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.683511 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:42Z","lastTransitionTime":"2026-01-28T15:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.786701 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.786780 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.786799 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.786827 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.786850 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:42Z","lastTransitionTime":"2026-01-28T15:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.890525 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.890576 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.890593 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.890617 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.890636 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:42Z","lastTransitionTime":"2026-01-28T15:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.992411 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.992454 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.992466 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.992481 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:42 crc kubenswrapper[4981]: I0128 15:03:42.992492 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:42Z","lastTransitionTime":"2026-01-28T15:03:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.095409 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.095472 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.095483 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.095505 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.095519 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:43Z","lastTransitionTime":"2026-01-28T15:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.198623 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.198692 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.198704 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.198725 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.198738 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:43Z","lastTransitionTime":"2026-01-28T15:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.268506 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:39:00.713341381 +0000 UTC Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.302827 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.302898 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.302917 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.302954 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.302973 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:43Z","lastTransitionTime":"2026-01-28T15:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.319602 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:43 crc kubenswrapper[4981]: E0128 15:03:43.319812 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.320792 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.320812 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:43 crc kubenswrapper[4981]: E0128 15:03:43.320917 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:43 crc kubenswrapper[4981]: E0128 15:03:43.321574 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.322070 4981 scope.go:117] "RemoveContainer" containerID="f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.406638 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.406723 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.406743 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.406777 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.406800 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:43Z","lastTransitionTime":"2026-01-28T15:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.509100 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.509135 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.509144 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.509157 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.509168 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:43Z","lastTransitionTime":"2026-01-28T15:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.587815 4981 generic.go:334] "Generic (PLEG): container finished" podID="76561bd4-81e0-4978-ac44-fb6bf5f60c7d" containerID="cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4" exitCode=0 Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.587885 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" event={"ID":"76561bd4-81e0-4978-ac44-fb6bf5f60c7d","Type":"ContainerDied","Data":"cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4"} Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.596495 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerStarted","Data":"8a13d600015838dce90dc9e91f718bd160ebaeb054d5ed0be6a3cda6a2f30235"} Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.596984 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.597049 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.605304 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.627399 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.629170 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.629234 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.629249 4981 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.629269 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.629289 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:43Z","lastTransitionTime":"2026-01-28T15:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.638528 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.638741 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.645442 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.663715 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.678738 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.704615 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z 
is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.722022 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.732492 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.732549 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.732570 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.732601 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.732621 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:43Z","lastTransitionTime":"2026-01-28T15:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.740466 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.760729 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.772387 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.785652 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.801302 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.818008 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.837317 4981 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f3
63581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\
\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.838425 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.838679 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.838721 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:43 crc kubenswrapper[4981]: 
I0128 15:03:43.838749 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.838773 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:43Z","lastTransitionTime":"2026-01-28T15:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.852241 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.867808 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.885601 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.916789 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a13d600015838dce90dc9e91f718bd160ebaeb054d5ed0be6a3cda6a2f30235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.935382 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.941274 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.941356 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.941372 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.941395 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.941412 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:43Z","lastTransitionTime":"2026-01-28T15:03:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.958126 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.978783 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:43 crc kubenswrapper[4981]: I0128 15:03:43.994405 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:43Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.019125 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.032501 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.044681 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.044717 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.044727 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.044742 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.044754 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:44Z","lastTransitionTime":"2026-01-28T15:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.048542 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.067613 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915
600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting 
RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.085681 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.108552 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.147228 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.147276 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.147287 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.147303 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.147313 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:44Z","lastTransitionTime":"2026-01-28T15:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.250841 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.250921 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.250946 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.250997 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.251021 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:44Z","lastTransitionTime":"2026-01-28T15:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.268833 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 12:44:23.086202618 +0000 UTC Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.354043 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.354088 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.354104 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.354151 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.354166 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:44Z","lastTransitionTime":"2026-01-28T15:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.415638 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.457108 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.457158 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.457173 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.457208 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.457222 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:44Z","lastTransitionTime":"2026-01-28T15:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.561102 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.561147 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.561160 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.561177 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.561217 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:44Z","lastTransitionTime":"2026-01-28T15:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.603641 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.608862 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b"} Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.609806 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.616096 4981 generic.go:334] "Generic (PLEG): container finished" podID="76561bd4-81e0-4978-ac44-fb6bf5f60c7d" containerID="4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7" exitCode=0 Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.616385 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" event={"ID":"76561bd4-81e0-4978-ac44-fb6bf5f60c7d","Type":"ContainerDied","Data":"4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7"} Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.627795 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.645664 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.664550 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.666156 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.666227 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.666240 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.666259 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.666271 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:44Z","lastTransitionTime":"2026-01-28T15:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.683411 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.697115 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.709013 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.777953 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.777997 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.778010 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.778029 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.778043 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:44Z","lastTransitionTime":"2026-01-28T15:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.797077 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a13d600015838dce90dc9e91f718bd160ebaeb054d5ed0be6a3cda6a2f30235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/r
un/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.810094 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.821719 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.836523 4981 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f3
63581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\
\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.851175 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
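Every status_manager.go:875 record in this stretch carries the attempted patch as a multiply escaped JSON string, like the kube-apiserver-crc payload that follows. A minimal sketch for recovering readable JSON from one captured payload, assuming the text between the outer quotes of err="failed to patch status \"{...}\"" was saved verbatim on a single line; the file name patch.txt is hypothetical, and strconv.Unquote / json.Indent are standard library:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

func main() {
	// Hypothetical input: the escaped payload copied from the journal,
	// saved on one line (Unquote cannot parse literal newlines).
	raw, err := os.ReadFile("patch.txt")
	if err != nil {
		panic(err)
	}
	s := string(bytes.TrimSpace(raw))
	// Each Unquote pass peels one escaping layer (\\\" -> \" -> ");
	// stop once the string no longer parses as a quoted literal,
	// which is when bare quotes, i.e. plain JSON, remain.
	for {
		u, err := strconv.Unquote(`"` + s + `"`)
		if err != nil || u == s {
			break
		}
		s = u
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(s), "", "  "); err != nil {
		panic(err)
	}
	fmt.Println(pretty.String())
}

Decoded this way, each payload is just a pod .status subtree: conditions, containerStatuses, podIP, and so on.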
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.866124 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
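The record that follows is for network-node-identity-vrzqb, the pod that itself runs the rejecting webhook (note the webhook container and its /etc/webhook-cert/ mount in the payload below); even its own status patch is refused by the certificate that expired on 2025-08-24T17:21:41Z. A small sketch for inspecting the certificate actually served at the endpoint quoted in every failure, assuming only the address https://127.0.0.1:9743 from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Endpoint copied from the webhook URL in the failures above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Verification is skipped on purpose: the point is to read the
		// certificate that normal verification rightly rejects as expired.
		InsecureSkipVerify: true,
	})
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Println("subject:  ", cert.Subject)
	fmt.Println("notBefore:", cert.NotBefore.UTC().Format(time.RFC3339))
	fmt.Println("notAfter: ", cert.NotAfter.UTC().Format(time.RFC3339))
	fmt.Println("expired:  ", time.Now().After(cert.NotAfter))
}

On this node the notAfter printed should match the 2025-08-24T17:21:41Z cutoff the kubelet keeps reporting against the 2026-01-28 clock.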
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.879209 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.882308 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.882340 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.882353 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.882371 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.882384 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:44Z","lastTransitionTime":"2026-01-28T15:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.895852 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.909551 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.924226 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status 
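The node-resolver-dp2b6 payload that follows opens with a $setElementOrder/conditions directive: the status manager sends a strategic merge patch in which that directive pins the order of the conditions list (merged by its type key) while only the changed entries travel in conditions. A sketch of that patch shape, with illustrative values copied from the surrounding payloads rather than computed as the kubelet would:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative only; the real patch is built by the kubelet's
	// status manager from cached vs. observed pod status.
	patch := map[string]any{
		"status": map[string]any{
			// Pins the merge order of the conditions list, keyed by "type".
			"$setElementOrder/conditions": []map[string]string{
				{"type": "PodReadyToStartContainers"},
				{"type": "Initialized"},
				{"type": "Ready"},
				{"type": "ContainersReady"},
				{"type": "PodScheduled"},
			},
			// Only the entries that changed are sent.
			"conditions": []map[string]string{
				{"type": "Ready", "status": "True", "lastTransitionTime": "2026-01-28T15:03:35Z"},
			},
		},
	}
	out, err := json.MarshalIndent(patch, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}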
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.943193 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a13d600015838dce90dc9e91f718bd160ebaeb054d5ed0be6a3cda6a2f30235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.957523 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.974339 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.980996 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.981063 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.981080 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.981105 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.981124 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:44Z","lastTransitionTime":"2026-01-28T15:03:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:44 crc kubenswrapper[4981]: I0128 15:03:44.990046 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:44 crc kubenswrapper[4981]: E0128 15:03:44.995563 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:44Z is after 
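2025-08-24T17:21:41Z"

Every status patch in this stretch of the log is rejected for the same reason: the kubelet's POST to the network-node-identity webhook at https://127.0.0.1:9743 fails TLS verification because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-28, a gap consistent with a VM image resumed long after its certificates were issued. Below is a minimal diagnostic sketch in Go, run on the node, that prints the validity window of whatever certificate that endpoint serves; the file name is hypothetical, the address is taken from the failing URL in the log, and InsecureSkipVerify is used only so the handshake survives long enough to read the already-expired certificate:

```go
// certwindow.go - hypothetical diagnostic: print the validity window of the
// certificate served by the webhook endpoint that the log entries above show
// failing ("Post https://127.0.0.1:9743/node ... certificate has expired").
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Address taken from the failing webhook URL in the log.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // read the cert even though it is expired
	})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%q notBefore=%s notAfter=%s expiredNow=%v\n",
			cert.Subject.CommonName,
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339),
			now.After(cert.NotAfter))
	}
}
```

Because the webhook rejects every patch, the kubelet retries the full node-status update each cycle, which is why the same image inventory reappears verbatim in the entries that follow.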
2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:44.999953 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.000001 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.000015 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.000037 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.000050 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:45Z","lastTransitionTime":"2026-01-28T15:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.008752 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e
95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.020793 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 
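2025-08-24T17:21:41Z"

The condition block repeated in these entries also explains why the node stays NotReady independently of the webhook failure: MemoryPressure, DiskPressure, and PIDPressure are all healthy, but Ready remains False because no CNI configuration file exists yet in /etc/kubernetes/cni/net.d/. A small sketch, in the same spirit as the one above, that mirrors the kubelet's complaint by listing that directory; the path comes straight from the log message, while the file name and the accepted extensions are assumptions:

```go
// cniconf.go - hypothetical check: does the directory the kubelet complains
// about contain any CNI configuration files yet?
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // path from the KubeletNotReady message
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", dir, err)
		return
	}
	found := 0
	for _, e := range entries {
		// Conventional CNI config extensions; assumed, not from the log.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			found++
			fmt.Println("CNI config:", filepath.Join(dir, e.Name()))
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration files; kubelet reports NetworkReady=false")
	}
}
```

Once the multus and network-operator pods further down in the log finish writing a configuration here, the Ready condition should flip on a subsequent sync.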
2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.025321 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.025388 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.025400 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.025423 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.025437 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:45Z","lastTransitionTime":"2026-01-28T15:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.029484 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.042252 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.042365 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"cru
n\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.046767 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.046804 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.046814 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.046832 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.046846 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:45Z","lastTransitionTime":"2026-01-28T15:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.056480 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.060857 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 
2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.064350 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.064393 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.064404 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.064419 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.064430 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:45Z","lastTransitionTime":"2026-01-28T15:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.072062 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.078468 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.078678 4981 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.081332 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.081373 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.081384 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.081406 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.081422 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:45Z","lastTransitionTime":"2026-01-28T15:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.088045 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.108607 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.122347 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.183766 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.183812 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.183824 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.183843 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.183857 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:45Z","lastTransitionTime":"2026-01-28T15:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.268973 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 00:44:27.001661188 +0000 UTC Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.286496 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.286533 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.286544 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.286562 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.286574 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:45Z","lastTransitionTime":"2026-01-28T15:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.287835 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.287921 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.287938 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:04:01.287920009 +0000 UTC m=+52.740078250 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.287995 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.288120 4981 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.288142 4981 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.288255 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:04:01.288229927 +0000 UTC m=+52.740388188 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.288288 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:04:01.288274068 +0000 UTC m=+52.740432319 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.318102 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.318163 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.318102 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.318376 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.318572 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.318718 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.389440 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.389535 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.389554 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.389592 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.389603 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.389622 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.389634 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:45Z","lastTransitionTime":"2026-01-28T15:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.389732 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.389798 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.389828 4981 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.389739 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.389919 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.389935 4981 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.389989 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:04:01.389948495 +0000 UTC m=+52.842106766 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:03:45 crc kubenswrapper[4981]: E0128 15:03:45.390037 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:04:01.390016627 +0000 UTC m=+52.842175098 (durationBeforeRetry 16s). 
Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.492749 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.492825 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.492843 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.492873 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.492895 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:45Z","lastTransitionTime":"2026-01-28T15:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.595509 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.595569 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.595584 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.595602 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.595616 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:45Z","lastTransitionTime":"2026-01-28T15:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.623414 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" event={"ID":"76561bd4-81e0-4978-ac44-fb6bf5f60c7d","Type":"ContainerStarted","Data":"c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8"} Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.642556 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.657371 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.673487 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.686353 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z"
Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.701456 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.701538 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.701557 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.701584 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.701606 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:45Z","lastTransitionTime":"2026-01-28T15:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
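Every "Failed to update status for pod" record in this capture fails the same way: the serving certificate of the pod.network-node-identity.openshift.io webhook expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-28T15:03:45Z, so the TLS handshake to https://127.0.0.1:9743 is rejected before any status patch is applied. The wording matches Go's crypto/x509 validity error, and the check can be reproduced with a short sketch; the certificate path below is a placeholder, not a file taken from this node.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // Placeholder path; substitute the webhook's actual serving cert.
        data, err := os.ReadFile("/tmp/webhook-serving.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // The same NotBefore/NotAfter window check that produces
        // "certificate has expired or is not yet valid" in the log above.
        now := time.Now()
        if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
            fmt.Printf("x509: certificate has expired or is not yet valid: current time %s is after %s\n",
                now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
            return
        }
        fmt.Printf("certificate valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
    }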
Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.706148 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z"
Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.724533 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.754151 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a13d600015838dce90dc9e91f718bd160ebaeb054d5ed0be6a3cda6a2f30235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.770445 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.784196 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.800936 4981 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.805057 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.805105 4981 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.805120 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.805142 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.805156 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:45Z","lastTransitionTime":"2026-01-28T15:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.817647 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.834710 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.850952 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.865310 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:45Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.907575 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.907607 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.907616 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.907631 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:45 crc kubenswrapper[4981]: I0128 15:03:45.907642 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:45Z","lastTransitionTime":"2026-01-28T15:03:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.010750 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.010805 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.010816 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.010832 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.010844 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:46Z","lastTransitionTime":"2026-01-28T15:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.113383 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.113468 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.113499 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.113559 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.113586 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:46Z","lastTransitionTime":"2026-01-28T15:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.217313 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.217373 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.217385 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.217408 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.217422 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:46Z","lastTransitionTime":"2026-01-28T15:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.270162 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 15:50:47.094689794 +0000 UTC Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.320837 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.320905 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.320930 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.320966 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.320988 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:46Z","lastTransitionTime":"2026-01-28T15:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.424328 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.424404 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.424428 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.424461 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.424481 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:46Z","lastTransitionTime":"2026-01-28T15:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.527462 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.527515 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.527533 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.527556 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.527575 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:46Z","lastTransitionTime":"2026-01-28T15:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.630176 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.630264 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.630287 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.630314 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.630335 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:46Z","lastTransitionTime":"2026-01-28T15:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.732701 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.732759 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.732769 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.732788 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.732800 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:46Z","lastTransitionTime":"2026-01-28T15:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.835729 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.835798 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.835822 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.835852 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.835873 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:46Z","lastTransitionTime":"2026-01-28T15:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.938976 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.939046 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.939057 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.939081 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:46 crc kubenswrapper[4981]: I0128 15:03:46.939094 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:46Z","lastTransitionTime":"2026-01-28T15:03:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.041797 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.041869 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.041898 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.041926 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.041946 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:47Z","lastTransitionTime":"2026-01-28T15:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.145708 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.145763 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.145778 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.145801 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.145814 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:47Z","lastTransitionTime":"2026-01-28T15:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.248602 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.248713 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.248746 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.248781 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.248801 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:47Z","lastTransitionTime":"2026-01-28T15:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.271254 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 07:17:01.697970253 +0000 UTC Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.317984 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.318025 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.318057 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:47 crc kubenswrapper[4981]: E0128 15:03:47.318252 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:47 crc kubenswrapper[4981]: E0128 15:03:47.318420 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:47 crc kubenswrapper[4981]: E0128 15:03:47.318590 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.352584 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.352662 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.352689 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.352723 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.352744 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:47Z","lastTransitionTime":"2026-01-28T15:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.456519 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.456604 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.456623 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.456649 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.456667 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:47Z","lastTransitionTime":"2026-01-28T15:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.558989 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.559038 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.559051 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.559070 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.559085 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:47Z","lastTransitionTime":"2026-01-28T15:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.661442 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.661500 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.661527 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.661544 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.661557 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:47Z","lastTransitionTime":"2026-01-28T15:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.764227 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.764278 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.764291 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.764308 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.764320 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:47Z","lastTransitionTime":"2026-01-28T15:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.867653 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.867711 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.867729 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.867755 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.867776 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:47Z","lastTransitionTime":"2026-01-28T15:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.971042 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.971428 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.971511 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.971588 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:47 crc kubenswrapper[4981]: I0128 15:03:47.971652 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:47Z","lastTransitionTime":"2026-01-28T15:03:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.098840 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84"]
Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.099425 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84"
Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.101155 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.101225 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.101237 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.101252 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.101266 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:48Z","lastTransitionTime":"2026-01-28T15:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.103109 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.105025 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.120546 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:48Z is after 2025-08-24T17:21:41Z"
Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.123141 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qwm8\" (UniqueName: \"kubernetes.io/projected/a4ddd8a8-aa37-436c-baea-4d2a7017c609-kube-api-access-9qwm8\") pod \"ovnkube-control-plane-749d76644c-snb84\" (UID: \"a4ddd8a8-aa37-436c-baea-4d2a7017c609\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84"
Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.123381 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a4ddd8a8-aa37-436c-baea-4d2a7017c609-env-overrides\") pod \"ovnkube-control-plane-749d76644c-snb84\" (UID: \"a4ddd8a8-aa37-436c-baea-4d2a7017c609\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84"
Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.123579 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a4ddd8a8-aa37-436c-baea-4d2a7017c609-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-snb84\" (UID: \"a4ddd8a8-aa37-436c-baea-4d2a7017c609\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84"
Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.123813 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a4ddd8a8-aa37-436c-baea-4d2a7017c609-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-snb84\" (UID: \"a4ddd8a8-aa37-436c-baea-4d2a7017c609\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84"
Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.136576 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.150309 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.182224 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a13d600015838dce90dc9e91f718bd160ebaeb054d5ed0be6a3cda6a2f30235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.204271 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.204652 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.204709 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.204724 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.204747 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.204768 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:48Z","lastTransitionTime":"2026-01-28T15:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.225023 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a4ddd8a8-aa37-436c-baea-4d2a7017c609-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-snb84\" (UID: \"a4ddd8a8-aa37-436c-baea-4d2a7017c609\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.225149 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qwm8\" (UniqueName: \"kubernetes.io/projected/a4ddd8a8-aa37-436c-baea-4d2a7017c609-kube-api-access-9qwm8\") pod \"ovnkube-control-plane-749d76644c-snb84\" (UID: \"a4ddd8a8-aa37-436c-baea-4d2a7017c609\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.225220 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a4ddd8a8-aa37-436c-baea-4d2a7017c609-env-overrides\") pod \"ovnkube-control-plane-749d76644c-snb84\" (UID: \"a4ddd8a8-aa37-436c-baea-4d2a7017c609\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.225297 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a4ddd8a8-aa37-436c-baea-4d2a7017c609-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-snb84\" (UID: \"a4ddd8a8-aa37-436c-baea-4d2a7017c609\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.230290 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a4ddd8a8-aa37-436c-baea-4d2a7017c609-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-snb84\" (UID: \"a4ddd8a8-aa37-436c-baea-4d2a7017c609\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.230642 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a4ddd8a8-aa37-436c-baea-4d2a7017c609-env-overrides\") pod \"ovnkube-control-plane-749d76644c-snb84\" (UID: \"a4ddd8a8-aa37-436c-baea-4d2a7017c609\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.234724 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.234827 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a4ddd8a8-aa37-436c-baea-4d2a7017c609-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-snb84\" (UID: \"a4ddd8a8-aa37-436c-baea-4d2a7017c609\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.260034 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qwm8\" (UniqueName: \"kubernetes.io/projected/a4ddd8a8-aa37-436c-baea-4d2a7017c609-kube-api-access-9qwm8\") pod \"ovnkube-control-plane-749d76644c-snb84\" (UID: \"a4ddd8a8-aa37-436c-baea-4d2a7017c609\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.265405 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.271683 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 19:27:41.217501258 +0000 UTC Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.288667 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.307858 4981 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.307909 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.307935 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.307956 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.307969 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:48Z","lastTransitionTime":"2026-01-28T15:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.322236 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.338650 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.354079 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.369555 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.390794 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.409971 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.411032 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.411075 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.411089 4981 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.411113 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.411127 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:48Z","lastTransitionTime":"2026-01-28T15:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.417441 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.433943 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:48Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:48 crc kubenswrapper[4981]: W0128 15:03:48.439020 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4ddd8a8_aa37_436c_baea_4d2a7017c609.slice/crio-1fad7528c44fac4134c11a62a5288f142197be99b154fda2bc336c7c07966e87 WatchSource:0}: Error finding container 1fad7528c44fac4134c11a62a5288f142197be99b154fda2bc336c7c07966e87: Status 404 returned error can't find the container with id 1fad7528c44fac4134c11a62a5288f142197be99b154fda2bc336c7c07966e87 Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.514730 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.514810 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.514836 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.514875 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.514902 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:48Z","lastTransitionTime":"2026-01-28T15:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.618367 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.618428 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.618442 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.618464 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.618479 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:48Z","lastTransitionTime":"2026-01-28T15:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.636018 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" event={"ID":"a4ddd8a8-aa37-436c-baea-4d2a7017c609","Type":"ContainerStarted","Data":"1fad7528c44fac4134c11a62a5288f142197be99b154fda2bc336c7c07966e87"} Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.720649 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.720710 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.720724 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.720746 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.720758 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:48Z","lastTransitionTime":"2026-01-28T15:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.823633 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.823684 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.823696 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.823714 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.823731 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:48Z","lastTransitionTime":"2026-01-28T15:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.932913 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.933008 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.933044 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.933073 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:48 crc kubenswrapper[4981]: I0128 15:03:48.933093 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:48Z","lastTransitionTime":"2026-01-28T15:03:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.035731 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.035784 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.035808 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.035844 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.035868 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:49Z","lastTransitionTime":"2026-01-28T15:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.139031 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.139092 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.139105 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.139126 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.139139 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:49Z","lastTransitionTime":"2026-01-28T15:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.214996 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-8rsts"] Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.215797 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:03:49 crc kubenswrapper[4981]: E0128 15:03:49.215923 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.235850 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.238693 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzdzb\" (UniqueName: \"kubernetes.io/projected/d5fda60c-a87b-4810-81df-4c7717d34ac1-kube-api-access-zzdzb\") pod \"network-metrics-daemon-8rsts\" (UID: \"d5fda60c-a87b-4810-81df-4c7717d34ac1\") " pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.239013 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs\") pod \"network-metrics-daemon-8rsts\" (UID: \"d5fda60c-a87b-4810-81df-4c7717d34ac1\") " pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.242290 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.242346 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.242362 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.242384 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.242399 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:49Z","lastTransitionTime":"2026-01-28T15:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.250906 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.277764 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 05:35:27.493285655 +0000 UTC Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.277914 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1
ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-r
elease\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\"
:0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.298834 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.319456 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:49 crc kubenswrapper[4981]: E0128 15:03:49.319822 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.319890 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:49 crc kubenswrapper[4981]: E0128 15:03:49.320028 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.320836 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:49 crc kubenswrapper[4981]: E0128 15:03:49.321091 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.321378 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.340115 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.357701 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.374309 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.388019 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc 
kubenswrapper[4981]: I0128 15:03:49.403463 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.421079 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.438212 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.454137 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.466004 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.477600 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.499710 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a13d600015838dce90dc9e91f718bd160ebaeb054d5ed0be6a3cda6a2f30235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.515115 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.529944 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.543827 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.559609 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.576509 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc 
kubenswrapper[4981]: I0128 15:03:49.588880 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.604070 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.619014 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.633893 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.646956 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.658562 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.678675 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a13d600015838dce90dc9e91f718bd160ebaeb054d5ed0be6a3cda6a2f30235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.691588 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.704169 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.724501 4981 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.739856 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.897158 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzdzb\" (UniqueName: 
\"kubernetes.io/projected/d5fda60c-a87b-4810-81df-4c7717d34ac1-kube-api-access-zzdzb\") pod \"network-metrics-daemon-8rsts\" (UID: \"d5fda60c-a87b-4810-81df-4c7717d34ac1\") " pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.897227 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs\") pod \"network-metrics-daemon-8rsts\" (UID: \"d5fda60c-a87b-4810-81df-4c7717d34ac1\") " pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:03:49 crc kubenswrapper[4981]: E0128 15:03:49.897377 4981 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:03:49 crc kubenswrapper[4981]: E0128 15:03:49.897440 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs podName:d5fda60c-a87b-4810-81df-4c7717d34ac1 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:50.397421714 +0000 UTC m=+41.849579955 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs") pod "network-metrics-daemon-8rsts" (UID: "d5fda60c-a87b-4810-81df-4c7717d34ac1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.907880 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.907938 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.907951 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.907971 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.907985 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:49Z","lastTransitionTime":"2026-01-28T15:03:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.909640 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" event={"ID":"a4ddd8a8-aa37-436c-baea-4d2a7017c609","Type":"ContainerStarted","Data":"887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af"} Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.914594 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovnkube-controller/0.log" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.918279 4981 generic.go:334] "Generic (PLEG): container finished" podID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerID="8a13d600015838dce90dc9e91f718bd160ebaeb054d5ed0be6a3cda6a2f30235" exitCode=1 Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.918337 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerDied","Data":"8a13d600015838dce90dc9e91f718bd160ebaeb054d5ed0be6a3cda6a2f30235"} Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.919394 4981 scope.go:117] "RemoveContainer" containerID="8a13d600015838dce90dc9e91f718bd160ebaeb054d5ed0be6a3cda6a2f30235" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.932381 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzdzb\" (UniqueName: \"kubernetes.io/projected/d5fda60c-a87b-4810-81df-4c7717d34ac1-kube-api-access-zzdzb\") pod \"network-metrics-daemon-8rsts\" (UID: \"d5fda60c-a87b-4810-81df-4c7717d34ac1\") " pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.944242 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.972491 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:49 crc kubenswrapper[4981]: I0128 15:03:49.987980 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:49Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.004916 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.012515 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.012578 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.012591 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.012611 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.012626 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:50Z","lastTransitionTime":"2026-01-28T15:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.018298 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.038329 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a13d600015838dce90dc9e91f718bd160ebaeb054d5ed0be6a3cda6a2f30235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a13d600015838dce90dc9e91f718bd160ebaeb054d5ed0be6a3cda6a2f30235\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"message\\\":\\\" 6326 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 15:03:48.603391 6326 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:03:48.603416 6326 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:03:48.604942 6326 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:03:48.604966 6326 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:03:48.605015 6326 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:03:48.605044 6326 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:03:48.605080 6326 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:03:48.605098 6326 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:03:48.605103 6326 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:03:48.605126 6326 factory.go:656] Stopping watch factory\\\\nI0128 15:03:48.605148 6326 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:03:48.605177 6326 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:03:48.605222 6326 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:03:48.605218 6326 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:03:48.605233 6326 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.051766 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.068585 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.089544 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1
e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.103875 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.119543 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.119604 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.119617 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.119642 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.119654 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:50Z","lastTransitionTime":"2026-01-28T15:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.125599 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.143213 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.162553 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.179132 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.192993 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:50 crc 
kubenswrapper[4981]: I0128 15:03:50.212609 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.222953 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.223003 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.223019 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.223046 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.223062 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:50Z","lastTransitionTime":"2026-01-28T15:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.278027 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:52:40.476430436 +0000 UTC Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.325805 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.325852 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.325863 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.325882 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.325901 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:50Z","lastTransitionTime":"2026-01-28T15:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.402877 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs\") pod \"network-metrics-daemon-8rsts\" (UID: \"d5fda60c-a87b-4810-81df-4c7717d34ac1\") " pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:03:50 crc kubenswrapper[4981]: E0128 15:03:50.403109 4981 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:03:50 crc kubenswrapper[4981]: E0128 15:03:50.403181 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs podName:d5fda60c-a87b-4810-81df-4c7717d34ac1 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:51.403162035 +0000 UTC m=+42.855320276 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs") pod "network-metrics-daemon-8rsts" (UID: "d5fda60c-a87b-4810-81df-4c7717d34ac1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.429390 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.429458 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.429476 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.429505 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.429526 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:50Z","lastTransitionTime":"2026-01-28T15:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.532447 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.532521 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.532540 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.532572 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.532592 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:50Z","lastTransitionTime":"2026-01-28T15:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.642328 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.642385 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.642396 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.642417 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.642431 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:50Z","lastTransitionTime":"2026-01-28T15:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.746140 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.746217 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.746232 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.746256 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.746270 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:50Z","lastTransitionTime":"2026-01-28T15:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.849826 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.849885 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.849897 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.849919 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.849930 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:50Z","lastTransitionTime":"2026-01-28T15:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.923929 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" event={"ID":"a4ddd8a8-aa37-436c-baea-4d2a7017c609","Type":"ContainerStarted","Data":"5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02"} Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.926311 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovnkube-controller/0.log" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.930531 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerStarted","Data":"25e10ec2465eb24df8fccb882edb74b3c325dcf656c14bcac1622889c25a9d5f"} Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.931430 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.941529 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.954887 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.954919 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.954928 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.954946 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.954960 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:50Z","lastTransitionTime":"2026-01-28T15:03:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.960481 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea1
77225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:50 crc kubenswrapper[4981]: I0128 15:03:50.982411 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"C
ompleted\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.004541 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:50Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.024461 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.046795 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.058918 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.058975 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.058989 4981 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.059009 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.059022 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:51Z","lastTransitionTime":"2026-01-28T15:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.066337 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.082743 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.103574 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.129838 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.149008 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.162136 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.162176 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.162215 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.162237 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.162249 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:51Z","lastTransitionTime":"2026-01-28T15:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.166179 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.181081 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.195033 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.207507 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.232853 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a13d600015838dce90dc9e91f718bd160ebaeb054d5ed0be6a3cda6a2f30235\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a13d600015838dce90dc9e91f718bd160ebaeb054d5ed0be6a3cda6a2f30235\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"message\\\":\\\" 6326 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 15:03:48.603391 6326 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:03:48.603416 6326 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:03:48.604942 6326 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:03:48.604966 6326 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:03:48.605015 6326 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:03:48.605044 6326 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:03:48.605080 6326 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:03:48.605098 6326 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:03:48.605103 6326 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:03:48.605126 6326 factory.go:656] Stopping watch factory\\\\nI0128 15:03:48.605148 6326 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:03:48.605177 6326 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:03:48.605222 6326 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:03:48.605218 6326 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:03:48.605233 6326 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47e
f0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.247845 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.265030 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.265521 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.265567 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.265581 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.265602 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.265616 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:51Z","lastTransitionTime":"2026-01-28T15:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.278598 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 01:08:53.316659327 +0000 UTC Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.281401 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.296564 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.317815 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:03:51 crc kubenswrapper[4981]: E0128 15:03:51.317971 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.318421 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.318515 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:51 crc kubenswrapper[4981]: E0128 15:03:51.318599 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:51 crc kubenswrapper[4981]: E0128 15:03:51.318697 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.319436 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:51 crc kubenswrapper[4981]: E0128 15:03:51.319695 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.322765 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 
crc kubenswrapper[4981]: I0128 15:03:51.340327 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.366830 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e10ec2465eb24df8fccb882edb74b3c325dcf656c14bcac1622889c25a9d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a13d600015838dce90dc9e91f718bd160ebaeb054d5ed0be6a3cda6a2f30235\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"message\\\":\\\" 6326 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 15:03:48.603391 6326 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:03:48.603416 6326 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:03:48.604942 6326 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:03:48.604966 6326 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:03:48.605015 6326 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:03:48.605044 6326 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:03:48.605080 6326 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:03:48.605098 6326 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:03:48.605103 6326 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:03:48.605126 6326 factory.go:656] Stopping watch factory\\\\nI0128 15:03:48.605148 6326 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:03:48.605177 6326 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:03:48.605222 6326 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:03:48.605218 6326 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:03:48.605233 6326 handler.go:208] Removed *v1.EgressFirewall event handler 
9\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.368597 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.368721 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.368799 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.368891 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.368964 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:51Z","lastTransitionTime":"2026-01-28T15:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.384724 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.398752 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.417692 4981 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.418329 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs\") pod 
\"network-metrics-daemon-8rsts\" (UID: \"d5fda60c-a87b-4810-81df-4c7717d34ac1\") " pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:03:51 crc kubenswrapper[4981]: E0128 15:03:51.418492 4981 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:03:51 crc kubenswrapper[4981]: E0128 15:03:51.418590 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs podName:d5fda60c-a87b-4810-81df-4c7717d34ac1 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:53.418571278 +0000 UTC m=+44.870729519 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs") pod "network-metrics-daemon-8rsts" (UID: "d5fda60c-a87b-4810-81df-4c7717d34ac1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.434954 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.450765 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.472544 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.472600 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.472619 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.472644 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.472661 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:51Z","lastTransitionTime":"2026-01-28T15:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.474619 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.492661 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.514026 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.527229 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc 
kubenswrapper[4981]: I0128 15:03:51.575980 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.576048 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.576058 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.576096 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.576111 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:51Z","lastTransitionTime":"2026-01-28T15:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.680088 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.680148 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.680160 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.680183 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.680221 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:51Z","lastTransitionTime":"2026-01-28T15:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.783698 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.783785 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.783803 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.783849 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.783866 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:51Z","lastTransitionTime":"2026-01-28T15:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.887634 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.887689 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.887701 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.887718 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.887731 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:51Z","lastTransitionTime":"2026-01-28T15:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.937579 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovnkube-controller/1.log" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.938624 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovnkube-controller/0.log" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.942128 4981 generic.go:334] "Generic (PLEG): container finished" podID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerID="25e10ec2465eb24df8fccb882edb74b3c325dcf656c14bcac1622889c25a9d5f" exitCode=1 Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.942257 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerDied","Data":"25e10ec2465eb24df8fccb882edb74b3c325dcf656c14bcac1622889c25a9d5f"} Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.942373 4981 scope.go:117] "RemoveContainer" containerID="8a13d600015838dce90dc9e91f718bd160ebaeb054d5ed0be6a3cda6a2f30235" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.943085 4981 scope.go:117] "RemoveContainer" containerID="25e10ec2465eb24df8fccb882edb74b3c325dcf656c14bcac1622889c25a9d5f" Jan 28 15:03:51 crc kubenswrapper[4981]: E0128 15:03:51.943347 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.960360 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status 
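
The pod_workers entry above shows the kubelet's restart throttling for the crash-looping ovnkube-controller container: each failed restart doubles the back-off delay, starting from the "back-off 10s" seen here. A rough sketch of that schedule, with the 5-minute cap taken from upstream kubelet defaults (an assumption; this excerpt only shows the 10s step):

package main

import (
	"fmt"
	"time"
)

func main() {
	// 10s initial delay as in the log; doubling per failed restart,
	// capped at 5m per upstream kubelet defaults (assumed here).
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("restart %d: back-off %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
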
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.984711 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e10ec2465eb24df8fccb882edb74b3c325dcf656c14bcac1622889c25a9d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a13d600015838dce90dc9e91f718bd160ebaeb054d5ed0be6a3cda6a2f30235\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"message\\\":\\\" 6326 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 15:03:48.603391 6326 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 15:03:48.603416 6326 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 15:03:48.604942 6326 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:03:48.604966 6326 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:03:48.605015 6326 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:03:48.605044 6326 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:03:48.605080 6326 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:03:48.605098 6326 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:03:48.605103 6326 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:03:48.605126 6326 factory.go:656] Stopping watch factory\\\\nI0128 15:03:48.605148 6326 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:03:48.605177 6326 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:03:48.605222 6326 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:03:48.605218 6326 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:03:48.605233 6326 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25e10ec2465eb24df8fccb882edb74b3c325dcf656c14bcac1622889c25a9d5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:03:51Z\\\",\\\"message\\\":\\\"/factory.go:160\\\\nI0128 15:03:51.183497 6565 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.183846 6565 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.183947 6565 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.184344 6565 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:03:51.184379 6565 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:03:51.184406 6565 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:03:51.184416 6565 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:03:51.184442 6565 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:03:51.184483 6565 factory.go:656] Stopping 
watch factory\\\\nI0128 15:03:51.184508 6565 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:03:51.184520 6565 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:03:51.184528 6565 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:03:51.184536 6565 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:03:51.184543 6565 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"nam
e\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:51Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.990591 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.990653 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.990668 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.990690 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:51 crc kubenswrapper[4981]: I0128 15:03:51.990705 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:51Z","lastTransitionTime":"2026-01-28T15:03:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.005288 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.027427 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.048234 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.065435 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.089079 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 
15:03:52.095118 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.095221 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.095240 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.095267 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.095285 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:52Z","lastTransitionTime":"2026-01-28T15:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.105696 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.124878 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T15:03:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.140925 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.157138 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.177623 4981 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.194660 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.204835 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.204884 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.204897 4981 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.204919 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.204933 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:52Z","lastTransitionTime":"2026-01-28T15:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.215594 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.237213 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.257000 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.279451 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:53:27.643363089 +0000 UTC Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.308782 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.308824 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.308835 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.308857 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.308871 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:52Z","lastTransitionTime":"2026-01-28T15:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.412679 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.412719 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.412731 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.412750 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.412762 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:52Z","lastTransitionTime":"2026-01-28T15:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.515723 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.515772 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.515781 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.515798 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.515811 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:52Z","lastTransitionTime":"2026-01-28T15:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.618898 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.618984 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.619010 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.619044 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.619065 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:52Z","lastTransitionTime":"2026-01-28T15:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.721971 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.722050 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.722076 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.722107 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.722132 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:52Z","lastTransitionTime":"2026-01-28T15:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.825721 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.825776 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.825791 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.825812 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.825829 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:52Z","lastTransitionTime":"2026-01-28T15:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.929403 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.929476 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.929490 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.929523 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.929541 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:52Z","lastTransitionTime":"2026-01-28T15:03:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.947627 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovnkube-controller/1.log" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.953095 4981 scope.go:117] "RemoveContainer" containerID="25e10ec2465eb24df8fccb882edb74b3c325dcf656c14bcac1622889c25a9d5f" Jan 28 15:03:52 crc kubenswrapper[4981]: E0128 15:03:52.953406 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.974628 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:52 crc kubenswrapper[4981]: I0128 15:03:52.991572 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:52Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.010495 4981 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.029504 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:53Z is after 2025-08-24T17:21:41Z" Jan 28 
15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.033100 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.033147 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.033164 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.033219 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.033239 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:53Z","lastTransitionTime":"2026-01-28T15:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.047140 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.066720 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.085504 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.105967 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.123360 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:53 crc 
kubenswrapper[4981]: I0128 15:03:53.136927 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.136974 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.137029 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.137051 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.137096 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:53Z","lastTransitionTime":"2026-01-28T15:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.143069 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.161164 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.177710 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.192529 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.205623 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.217774 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.241625 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.241684 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.241701 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.241724 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.241744 4981 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:53Z","lastTransitionTime":"2026-01-28T15:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.253736 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-scri
pt-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e10ec2465eb24df8fccb882edb74b3c325dcf656c14bcac1622889c25a9d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25e10ec2465eb24df8fccb882edb74b3c325dcf656c14bcac1622889c25a9d5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:03:51Z\\\",\\\"message\\\":\\\"/factory.go:160\\\\nI0128 15:03:51.183497 6565 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.183846 6565 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.183947 6565 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.184344 6565 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:03:51.184379 6565 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:03:51.184406 6565 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:03:51.184416 6565 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:03:51.184442 6565 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:03:51.184483 6565 factory.go:656] Stopping watch factory\\\\nI0128 15:03:51.184508 6565 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:03:51.184520 6565 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:03:51.184528 6565 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:03:51.184536 6565 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:03:51.184543 6565 
handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:53Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.280560 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 03:40:20.197846805 +0000 UTC Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.318071 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:53 crc kubenswrapper[4981]: E0128 15:03:53.318267 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.318809 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.318913 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:53 crc kubenswrapper[4981]: E0128 15:03:53.318992 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.319044 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:03:53 crc kubenswrapper[4981]: E0128 15:03:53.319113 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:03:53 crc kubenswrapper[4981]: E0128 15:03:53.319219 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.344483 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.344554 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.344563 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.344584 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.344598 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:53Z","lastTransitionTime":"2026-01-28T15:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.439778 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs\") pod \"network-metrics-daemon-8rsts\" (UID: \"d5fda60c-a87b-4810-81df-4c7717d34ac1\") " pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:03:53 crc kubenswrapper[4981]: E0128 15:03:53.440002 4981 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:03:53 crc kubenswrapper[4981]: E0128 15:03:53.440108 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs podName:d5fda60c-a87b-4810-81df-4c7717d34ac1 nodeName:}" failed. No retries permitted until 2026-01-28 15:03:57.440086504 +0000 UTC m=+48.892244815 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs") pod "network-metrics-daemon-8rsts" (UID: "d5fda60c-a87b-4810-81df-4c7717d34ac1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.447912 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.447960 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.447975 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.447995 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.448008 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:53Z","lastTransitionTime":"2026-01-28T15:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.550725 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.550805 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.550821 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.550851 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.550871 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:53Z","lastTransitionTime":"2026-01-28T15:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.654347 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.654413 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.654431 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.654459 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.654480 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:53Z","lastTransitionTime":"2026-01-28T15:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.757838 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.757909 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.757920 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.757942 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.757953 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:53Z","lastTransitionTime":"2026-01-28T15:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.860248 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.860303 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.860318 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.860340 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.860353 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:53Z","lastTransitionTime":"2026-01-28T15:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.963298 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.963361 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.963376 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.963397 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:53 crc kubenswrapper[4981]: I0128 15:03:53.963409 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:53Z","lastTransitionTime":"2026-01-28T15:03:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.066892 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.066971 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.066989 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.067017 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.067043 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:54Z","lastTransitionTime":"2026-01-28T15:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.170280 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.170350 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.170369 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.170396 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.170415 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:54Z","lastTransitionTime":"2026-01-28T15:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.274639 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.274701 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.274720 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.274743 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.274761 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:54Z","lastTransitionTime":"2026-01-28T15:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.281149 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 19:38:54.190345926 +0000 UTC Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.377313 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.377359 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.377372 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.377392 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.377405 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:54Z","lastTransitionTime":"2026-01-28T15:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.481642 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.481705 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.481719 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.481747 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.481766 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:54Z","lastTransitionTime":"2026-01-28T15:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.507167 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.525251 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.539572 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.557620 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.583387 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e10ec2465eb24df8fccb882edb74b3c325dcf656c14bcac1622889c25a9d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25e10ec2465eb24df8fccb882edb74b3c325dcf656c14bcac1622889c25a9d5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:03:51Z\\\",\\\"message\\\":\\\"/factory.go:160\\\\nI0128 15:03:51.183497 6565 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.183846 6565 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.183947 6565 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.184344 6565 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:03:51.184379 6565 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:03:51.184406 6565 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:03:51.184416 6565 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:03:51.184442 6565 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:03:51.184483 6565 factory.go:656] Stopping watch factory\\\\nI0128 15:03:51.184508 6565 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:03:51.184520 6565 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:03:51.184528 6565 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:03:51.184536 6565 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:03:51.184543 6565 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.585240 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.585292 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.585303 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.585320 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.585331 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:54Z","lastTransitionTime":"2026-01-28T15:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.600826 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.621864 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.639302 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.656806 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.680817 4981 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.688609 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.688695 4981 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.688725 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.688761 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.688788 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:54Z","lastTransitionTime":"2026-01-28T15:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.703027 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.720333 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.739491 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.753108 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.774039 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.789393 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.791433 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.791474 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.791488 4981 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.791510 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.791527 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:54Z","lastTransitionTime":"2026-01-28T15:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.804043 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:54Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.894537 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.894622 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.894642 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.894674 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.894692 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:54Z","lastTransitionTime":"2026-01-28T15:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.997902 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.997968 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.997988 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.998015 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:54 crc kubenswrapper[4981]: I0128 15:03:54.998036 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:54Z","lastTransitionTime":"2026-01-28T15:03:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.101140 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.101180 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.101210 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.101224 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.101234 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:55Z","lastTransitionTime":"2026-01-28T15:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.192530 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.192606 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.192624 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.192650 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.192667 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:55Z","lastTransitionTime":"2026-01-28T15:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:55 crc kubenswrapper[4981]: E0128 15:03:55.214072 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.219493 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.219625 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.219646 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.219672 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.219720 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:55Z","lastTransitionTime":"2026-01-28T15:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:55 crc kubenswrapper[4981]: E0128 15:03:55.242455 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.247957 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.248062 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.248087 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.248144 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.248161 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:55Z","lastTransitionTime":"2026-01-28T15:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:55 crc kubenswrapper[4981]: E0128 15:03:55.270801 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.275355 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.275422 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.275447 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.275482 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.275526 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:55Z","lastTransitionTime":"2026-01-28T15:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.281699 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 11:40:33.612583724 +0000 UTC Jan 28 15:03:55 crc kubenswrapper[4981]: E0128 15:03:55.295408 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.299946 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.299996 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.300007 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.300021 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.300031 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:55Z","lastTransitionTime":"2026-01-28T15:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.319533 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:55 crc kubenswrapper[4981]: E0128 15:03:55.319775 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.320276 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:55 crc kubenswrapper[4981]: E0128 15:03:55.320463 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.320609 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:03:55 crc kubenswrapper[4981]: E0128 15:03:55.320923 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.320609 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:55 crc kubenswrapper[4981]: E0128 15:03:55.321135 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:55 crc kubenswrapper[4981]: E0128 15:03:55.322283 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:55Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:55 crc kubenswrapper[4981]: E0128 15:03:55.322568 4981 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.328020 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.328085 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.328111 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.328141 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.328166 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:55Z","lastTransitionTime":"2026-01-28T15:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.470102 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.470628 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.470681 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.470706 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.470723 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:55Z","lastTransitionTime":"2026-01-28T15:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.573900 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.573960 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.573977 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.574004 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.574021 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:55Z","lastTransitionTime":"2026-01-28T15:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.677230 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.677284 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.677307 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.677332 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.677350 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:55Z","lastTransitionTime":"2026-01-28T15:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.780104 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.780164 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.780212 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.780242 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.780262 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:55Z","lastTransitionTime":"2026-01-28T15:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.882912 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.882951 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.882960 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.882975 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.882987 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:55Z","lastTransitionTime":"2026-01-28T15:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.985357 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.985424 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.985435 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.985450 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:55 crc kubenswrapper[4981]: I0128 15:03:55.985479 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:55Z","lastTransitionTime":"2026-01-28T15:03:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.088913 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.089012 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.089062 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.089093 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.089111 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:56Z","lastTransitionTime":"2026-01-28T15:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.192628 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.192692 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.192713 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.192739 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.192758 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:56Z","lastTransitionTime":"2026-01-28T15:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.281921 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 16:37:56.640865447 +0000 UTC Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.295834 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.295896 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.295913 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.295939 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.295956 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:56Z","lastTransitionTime":"2026-01-28T15:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.399243 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.399333 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.399358 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.399390 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.399414 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:56Z","lastTransitionTime":"2026-01-28T15:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.502684 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.502748 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.502770 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.502799 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.502823 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:56Z","lastTransitionTime":"2026-01-28T15:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.606153 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.606263 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.606282 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.606310 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.606330 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:56Z","lastTransitionTime":"2026-01-28T15:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.709711 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.709802 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.709821 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.709844 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.709864 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:56Z","lastTransitionTime":"2026-01-28T15:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.817640 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.817707 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.817726 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.817755 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.817775 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:56Z","lastTransitionTime":"2026-01-28T15:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.921057 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.921109 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.921123 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.921142 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:56 crc kubenswrapper[4981]: I0128 15:03:56.921156 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:56Z","lastTransitionTime":"2026-01-28T15:03:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.024140 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.024179 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.024222 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.024238 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.024249 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:57Z","lastTransitionTime":"2026-01-28T15:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.128089 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.128122 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.128131 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.128144 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.128154 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:57Z","lastTransitionTime":"2026-01-28T15:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.231135 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.231630 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.231852 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.232102 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.232659 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:57Z","lastTransitionTime":"2026-01-28T15:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.282083 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 20:30:16.030893661 +0000 UTC Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.317975 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.318032 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.318047 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:03:57 crc kubenswrapper[4981]: E0128 15:03:57.318354 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:03:57 crc kubenswrapper[4981]: E0128 15:03:57.318495 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:03:57 crc kubenswrapper[4981]: E0128 15:03:57.318670 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.319332 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:03:57 crc kubenswrapper[4981]: E0128 15:03:57.319747 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.335805 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.336125 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.336328 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.336638 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.337178 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:57Z","lastTransitionTime":"2026-01-28T15:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.441658 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.441731 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.441751 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.441780 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.441803 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:57Z","lastTransitionTime":"2026-01-28T15:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.494077 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs\") pod \"network-metrics-daemon-8rsts\" (UID: \"d5fda60c-a87b-4810-81df-4c7717d34ac1\") " pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:03:57 crc kubenswrapper[4981]: E0128 15:03:57.494402 4981 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:03:57 crc kubenswrapper[4981]: E0128 15:03:57.494481 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs podName:d5fda60c-a87b-4810-81df-4c7717d34ac1 nodeName:}" failed. No retries permitted until 2026-01-28 15:04:05.494457962 +0000 UTC m=+56.946616243 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs") pod "network-metrics-daemon-8rsts" (UID: "d5fda60c-a87b-4810-81df-4c7717d34ac1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.545633 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.545717 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.545742 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.545773 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.545795 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:57Z","lastTransitionTime":"2026-01-28T15:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.649173 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.649597 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.649702 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.649816 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.649936 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:57Z","lastTransitionTime":"2026-01-28T15:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.753279 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.753377 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.753393 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.753411 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.753425 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:57Z","lastTransitionTime":"2026-01-28T15:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.856137 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.856537 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.856630 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.856743 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.856849 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:57Z","lastTransitionTime":"2026-01-28T15:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.960032 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.960097 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.960117 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.960143 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:57 crc kubenswrapper[4981]: I0128 15:03:57.960162 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:57Z","lastTransitionTime":"2026-01-28T15:03:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.063827 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.063915 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.063938 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.063971 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.063992 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:58Z","lastTransitionTime":"2026-01-28T15:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.166328 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.166803 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.166879 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.166975 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.167048 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:58Z","lastTransitionTime":"2026-01-28T15:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.269619 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.269875 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.270136 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.270328 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.270480 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:58Z","lastTransitionTime":"2026-01-28T15:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.283307 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:29:15.136784798 +0000 UTC
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.374399 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.374457 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.374476 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.374505 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.374527 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:58Z","lastTransitionTime":"2026-01-28T15:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.478488 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.478965 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.479162 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.479370 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.479491 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:58Z","lastTransitionTime":"2026-01-28T15:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.582891 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.582933 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.582970 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.582987 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.583000 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:58Z","lastTransitionTime":"2026-01-28T15:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.686545 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.686655 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.686670 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.686697 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.686715 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:58Z","lastTransitionTime":"2026-01-28T15:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.790356 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.790428 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.790449 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.790479 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.790497 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:58Z","lastTransitionTime":"2026-01-28T15:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.893765 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.894508 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.894668 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.894811 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.894949 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:58Z","lastTransitionTime":"2026-01-28T15:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.998821 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.998925 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.998951 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.998984 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:58 crc kubenswrapper[4981]: I0128 15:03:58.999008 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:58Z","lastTransitionTime":"2026-01-28T15:03:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.103615 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.103686 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.103704 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.103736 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.103757 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:59Z","lastTransitionTime":"2026-01-28T15:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.207002 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.207086 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.207108 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.207138 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.207160 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:59Z","lastTransitionTime":"2026-01-28T15:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.284139 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 17:03:10.380552504 +0000 UTC
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.311383 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.311426 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.311440 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.311460 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.311474 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:59Z","lastTransitionTime":"2026-01-28T15:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.318406 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts"
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.318492 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.318843 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:03:59 crc kubenswrapper[4981]: E0128 15:03:59.318878 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.318558 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:03:59 crc kubenswrapper[4981]: E0128 15:03:59.319272 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:03:59 crc kubenswrapper[4981]: E0128 15:03:59.319665 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:03:59 crc kubenswrapper[4981]: E0128 15:03:59.320567 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.345713 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.368041 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.388105 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.402371 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.414994 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.415091 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.415105 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.415126 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.415139 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:59Z","lastTransitionTime":"2026-01-28T15:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.420639 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.434085 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.448813 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.461003 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.482017 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.497160 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.511341 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.518807 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.518863 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.518882 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.518909 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.518928 4981 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:59Z","lastTransitionTime":"2026-01-28T15:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.541973 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-scri
pt-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e10ec2465eb24df8fccb882edb74b3c325dcf656c14bcac1622889c25a9d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25e10ec2465eb24df8fccb882edb74b3c325dcf656c14bcac1622889c25a9d5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:03:51Z\\\",\\\"message\\\":\\\"/factory.go:160\\\\nI0128 15:03:51.183497 6565 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.183846 6565 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.183947 6565 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.184344 6565 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:03:51.184379 6565 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:03:51.184406 6565 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:03:51.184416 6565 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:03:51.184442 6565 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:03:51.184483 6565 factory.go:656] Stopping watch factory\\\\nI0128 15:03:51.184508 6565 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:03:51.184520 6565 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:03:51.184528 6565 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:03:51.184536 6565 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:03:51.184543 6565 
handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.563380 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.580708 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.596697 4981 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:59Z is after 2025-08-24T17:21:41Z" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.613750 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:03:59Z is after 2025-08-24T17:21:41Z" Jan 28 
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.621359 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.621399 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.621414 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.621431 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.621443 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:59Z","lastTransitionTime":"2026-01-28T15:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.724392 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.724428 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.724438 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.724453 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.724463 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:59Z","lastTransitionTime":"2026-01-28T15:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.827652 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.827750 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.827788 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.827817 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.827838 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:59Z","lastTransitionTime":"2026-01-28T15:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
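
The NotReady flapping above has the single cause that the condition message states directly: nothing has written a CNI configuration into /etc/kubernetes/cni/net.d/ yet, because the ovnkube-controller container that would produce it is itself in CrashLoopBackOff (see its status earlier in the log). A small illustrative check of that directory follows; the path is taken verbatim from the message, and running this on the node (plus the standard .conf/.conflist naming) is an assumption for illustration, not something the log shows.

#!/usr/bin/env python3
"""List CNI network configs where kubelet is looking, per the NotReady
condition above."""
import json
import os

CNI_DIR = "/etc/kubernetes/cni/net.d/"  # path verbatim from the kubelet message


def cni_configs(path: str = CNI_DIR):
    """Return (filename, network name) pairs for every CNI config found."""
    if not os.path.isdir(path):
        return []
    found = []
    for name in sorted(os.listdir(path)):
        if name.endswith((".conf", ".conflist")):
            with open(os.path.join(path, name)) as f:
                found.append((name, json.load(f).get("name", "<unnamed>")))
    return found


if __name__ == "__main__":
    configs = cni_configs()
    if configs:
        for fname, netname in configs:
            print(f"{fname}: network {netname!r}")
    else:
        # Matches the kubelet message: no CNI configuration file, so the
        # node's Ready condition stays False.
        print(f"no CNI configuration file in {CNI_DIR}")

Until ovnkube-controller survives long enough to write that file, the Ready condition will keep toggling exactly as in the entries that follow.
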
Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.931073 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.931143 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.931171 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.931232 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:03:59 crc kubenswrapper[4981]: I0128 15:03:59.931257 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:03:59Z","lastTransitionTime":"2026-01-28T15:03:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.034735 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.034816 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.034830 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.034855 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.034869 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:00Z","lastTransitionTime":"2026-01-28T15:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.138598 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.138665 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.138676 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.138699 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.138713 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:00Z","lastTransitionTime":"2026-01-28T15:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.241977 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.242038 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.242056 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.242081 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.242101 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:00Z","lastTransitionTime":"2026-01-28T15:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.285471 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 16:35:00.275579047 +0000 UTC Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.345664 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.345717 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.345728 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.345747 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.345758 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:00Z","lastTransitionTime":"2026-01-28T15:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.449781 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.449843 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.449854 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.449876 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.449890 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:00Z","lastTransitionTime":"2026-01-28T15:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.553507 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.553574 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.553599 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.554251 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.554318 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:00Z","lastTransitionTime":"2026-01-28T15:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.658658 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.658709 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.658722 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.658745 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.658758 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:00Z","lastTransitionTime":"2026-01-28T15:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.762012 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.762073 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.762093 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.762118 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.762135 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:00Z","lastTransitionTime":"2026-01-28T15:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.865790 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.865833 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.865844 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.865863 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.865876 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:00Z","lastTransitionTime":"2026-01-28T15:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.969328 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.969685 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.969782 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.969886 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:00 crc kubenswrapper[4981]: I0128 15:04:00.970006 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:00Z","lastTransitionTime":"2026-01-28T15:04:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.073616 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.073657 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.073668 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.073714 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.073729 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:01Z","lastTransitionTime":"2026-01-28T15:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.177709 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.177805 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.177827 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.177863 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.177887 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:01Z","lastTransitionTime":"2026-01-28T15:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.281154 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.281255 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.281276 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.281310 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.281330 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:01Z","lastTransitionTime":"2026-01-28T15:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.286277 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 21:28:41.901137754 +0000 UTC Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.323469 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.323535 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.323547 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.323777 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:01 crc kubenswrapper[4981]: E0128 15:04:01.323825 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:01 crc kubenswrapper[4981]: E0128 15:04:01.323858 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:01 crc kubenswrapper[4981]: E0128 15:04:01.324071 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:01 crc kubenswrapper[4981]: E0128 15:04:01.324239 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
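
The certificate_manager lines above are worth a note: at 15:04:00 the kubelet reported a rotation deadline of 2025-12-30, and one second later a deadline of 2025-11-12. Both are in the past relative to the node's clock, so rotation of the kubelet-serving certificate is already overdue, consistent with the expired-webhook-certificate failures earlier. The deadlines differ because client-go jitters the rotation deadline, picking (as I read the client-go certificate manager, an assumption worth verifying) a uniformly random point at roughly 70 to 90 percent of the certificate's lifetime on each evaluation. A toy sketch of that computation, with a notBefore that is assumed since the log only reports the expiration date:

#!/usr/bin/env python3
"""Toy model of the jittered rotation deadline reported by
certificate_manager.go above. The 70-90% window is my reading of
client-go's behavior; the one-year lifetime is an assumption."""
import random
from datetime import datetime, timedelta


def rotation_deadline(not_before: datetime, not_after: datetime) -> datetime:
    # A random point between 70% and 90% of the certificate lifetime,
    # re-randomized on each call, hence the two different deadlines logged.
    total = not_after - not_before
    return not_before + total * random.uniform(0.7, 0.9)


if __name__ == "__main__":
    not_after = datetime(2026, 2, 24, 5, 53, 3)          # expiration from the log
    not_before = not_after - timedelta(days=365)         # assumed, not in the log
    deadline = rotation_deadline(not_before, not_after)
    print(f"rotate at {deadline:%Y-%m-%d %H:%M:%S} (node clock is 2026-01-28, so rotation is already due)")

Both logged deadlines fall inside that 70-90% window for a one-year certificate ending 2026-02-24, which is at least consistent with this model.
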
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.338819 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:04:01 crc kubenswrapper[4981]: E0128 15:04:01.339030 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:04:33.338997676 +0000 UTC m=+84.791155957 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.339268 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.339341 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:01 crc kubenswrapper[4981]: E0128 15:04:01.339462 4981 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:04:01 crc kubenswrapper[4981]: E0128 15:04:01.339461 4981 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:04:01 crc kubenswrapper[4981]: E0128 15:04:01.339556 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:04:33.339534079 +0000 UTC m=+84.791692360 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 15:04:01 crc kubenswrapper[4981]: E0128 15:04:01.339596 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:04:33.33957808 +0000 UTC m=+84.791736361 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.384489 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.384580 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.384618 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.384654 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.384682 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:01Z","lastTransitionTime":"2026-01-28T15:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.440012 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.440127 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:01 crc kubenswrapper[4981]: E0128 15:04:01.440339 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:04:01 crc kubenswrapper[4981]: E0128 15:04:01.440367 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 15:04:01 crc kubenswrapper[4981]: E0128 15:04:01.440376 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:04:01 crc kubenswrapper[4981]: E0128 15:04:01.440401 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 15:04:01 crc kubenswrapper[4981]: E0128 15:04:01.440408 4981 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:04:01 crc kubenswrapper[4981]: E0128 15:04:01.440425 4981 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:04:01 crc kubenswrapper[4981]: E0128 15:04:01.440504 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:04:33.440480789 +0000 UTC m=+84.892639070 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:04:01 crc kubenswrapper[4981]: E0128 15:04:01.440533 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:04:33.44052136 +0000 UTC m=+84.892679631 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.488355 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.488418 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.488434 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.488454 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.488469 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:01Z","lastTransitionTime":"2026-01-28T15:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.592121 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.592230 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.592257 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.592295 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.592320 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:01Z","lastTransitionTime":"2026-01-28T15:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.695887 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.695940 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.695955 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.695974 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.696007 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:01Z","lastTransitionTime":"2026-01-28T15:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.799117 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.799176 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.799216 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.799238 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.799259 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:01Z","lastTransitionTime":"2026-01-28T15:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.903109 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.903166 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.903217 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.903246 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:01 crc kubenswrapper[4981]: I0128 15:04:01.903270 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:01Z","lastTransitionTime":"2026-01-28T15:04:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.007654 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.007758 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.007777 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.007833 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.007855 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:02Z","lastTransitionTime":"2026-01-28T15:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.111044 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.111101 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.111118 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.111141 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.111158 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:02Z","lastTransitionTime":"2026-01-28T15:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.214241 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.214307 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.214324 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.214347 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.214369 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:02Z","lastTransitionTime":"2026-01-28T15:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.286771 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 03:09:04.113344538 +0000 UTC Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.317057 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.317119 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.317142 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.317171 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.317238 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:02Z","lastTransitionTime":"2026-01-28T15:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.420580 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.420666 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.420689 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.420721 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.420745 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:02Z","lastTransitionTime":"2026-01-28T15:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.540907 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.540965 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.540982 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.541006 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.541023 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:02Z","lastTransitionTime":"2026-01-28T15:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.644585 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.644645 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.644663 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.644686 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.644703 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:02Z","lastTransitionTime":"2026-01-28T15:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.747894 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.747962 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.747985 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.748018 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.748042 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:02Z","lastTransitionTime":"2026-01-28T15:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.851715 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.851773 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.851790 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.851815 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.851832 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:02Z","lastTransitionTime":"2026-01-28T15:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.955172 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.955270 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.955288 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.955363 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:02 crc kubenswrapper[4981]: I0128 15:04:02.955383 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:02Z","lastTransitionTime":"2026-01-28T15:04:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.058312 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.058376 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.058402 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.058435 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.058460 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:03Z","lastTransitionTime":"2026-01-28T15:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.160626 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.160702 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.160723 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.160754 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.160777 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:03Z","lastTransitionTime":"2026-01-28T15:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.263466 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.263665 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.263691 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.263719 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.263736 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:03Z","lastTransitionTime":"2026-01-28T15:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.287618 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 09:23:03.768173355 +0000 UTC Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.318350 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.318438 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.318380 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.318635 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:03 crc kubenswrapper[4981]: E0128 15:04:03.318653 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:03 crc kubenswrapper[4981]: E0128 15:04:03.318757 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:03 crc kubenswrapper[4981]: E0128 15:04:03.318880 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:03 crc kubenswrapper[4981]: E0128 15:04:03.319033 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.367220 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.367299 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.367324 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.367354 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.367380 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:03Z","lastTransitionTime":"2026-01-28T15:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.470318 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.470383 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.470400 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.470428 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.470449 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:03Z","lastTransitionTime":"2026-01-28T15:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.573259 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.573325 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.573343 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.573368 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.573393 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:03Z","lastTransitionTime":"2026-01-28T15:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.677822 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.677892 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.677915 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.677941 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.677961 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:03Z","lastTransitionTime":"2026-01-28T15:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.781581 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.781661 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.781682 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.781709 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.781728 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:03Z","lastTransitionTime":"2026-01-28T15:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.884514 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.884580 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.884603 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.884627 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.884645 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:03Z","lastTransitionTime":"2026-01-28T15:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.988111 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.988177 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.988223 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.988248 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:03 crc kubenswrapper[4981]: I0128 15:04:03.988268 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:03Z","lastTransitionTime":"2026-01-28T15:04:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.076654 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.091435 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.091476 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.091495 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.091520 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.091537 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:04Z","lastTransitionTime":"2026-01-28T15:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.093449 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.113048 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://25e10ec2465eb24df8fccb882edb74b3c325dcf6
56c14bcac1622889c25a9d5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25e10ec2465eb24df8fccb882edb74b3c325dcf656c14bcac1622889c25a9d5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:03:51Z\\\",\\\"message\\\":\\\"/factory.go:160\\\\nI0128 15:03:51.183497 6565 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.183846 6565 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.183947 6565 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.184344 6565 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:03:51.184379 6565 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:03:51.184406 6565 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:03:51.184416 6565 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:03:51.184442 6565 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:03:51.184483 6565 factory.go:656] Stopping watch factory\\\\nI0128 15:03:51.184508 6565 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:03:51.184520 6565 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:03:51.184528 6565 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:03:51.184536 6565 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:03:51.184543 6565 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.136552 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.159154 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.180333 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.195039 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.195093 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.195109 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.195137 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.195154 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:04Z","lastTransitionTime":"2026-01-28T15:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.196440 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.211784 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.229459 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:04Z is after 2025-08-24T17:21:41Z" Jan 28 
15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.245342 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.263674 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.285951 4981 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.287815 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 08:50:02.42636271 +0000 UTC Jan 28 15:04:04 crc 
kubenswrapper[4981]: I0128 15:04:04.298465 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.298615 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.298731 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.298829 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.298941 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:04Z","lastTransitionTime":"2026-01-28T15:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.308455 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] 
Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.327296 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.344291 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.361435 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.379763 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:04 crc 
kubenswrapper[4981]: I0128 15:04:04.400787 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:04Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.403446 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.403514 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.403525 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.403553 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.403569 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:04Z","lastTransitionTime":"2026-01-28T15:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.506641 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.506713 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.506732 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.506758 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.506775 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:04Z","lastTransitionTime":"2026-01-28T15:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.612069 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.612136 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.612154 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.612181 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.612227 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:04Z","lastTransitionTime":"2026-01-28T15:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.716048 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.716089 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.716100 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.716116 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.716126 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:04Z","lastTransitionTime":"2026-01-28T15:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.818512 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.818552 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.818564 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.818581 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.818591 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:04Z","lastTransitionTime":"2026-01-28T15:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.922050 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.922099 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.922109 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.922129 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:04 crc kubenswrapper[4981]: I0128 15:04:04.922142 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:04Z","lastTransitionTime":"2026-01-28T15:04:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.025365 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.025537 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.025605 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.025638 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.025660 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:05Z","lastTransitionTime":"2026-01-28T15:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.128930 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.128983 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.128997 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.129015 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.129028 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:05Z","lastTransitionTime":"2026-01-28T15:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.232554 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.232629 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.232652 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.232680 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.232701 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:05Z","lastTransitionTime":"2026-01-28T15:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.289487 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:34:22.18410834 +0000 UTC Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.318250 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.318375 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.318375 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.318397 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:05 crc kubenswrapper[4981]: E0128 15:04:05.318542 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:05 crc kubenswrapper[4981]: E0128 15:04:05.318667 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:05 crc kubenswrapper[4981]: E0128 15:04:05.319265 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:05 crc kubenswrapper[4981]: E0128 15:04:05.319418 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.319773 4981 scope.go:117] "RemoveContainer" containerID="25e10ec2465eb24df8fccb882edb74b3c325dcf656c14bcac1622889c25a9d5f" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.336143 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.336251 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.336272 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.336333 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.336358 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:05Z","lastTransitionTime":"2026-01-28T15:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.440437 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.440495 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.440513 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.440539 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.440559 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:05Z","lastTransitionTime":"2026-01-28T15:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.518343 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.518395 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.518406 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.518425 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.518439 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:05Z","lastTransitionTime":"2026-01-28T15:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:05 crc kubenswrapper[4981]: E0128 15:04:05.535379 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.540316 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.540372 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.540384 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.540403 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.540418 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:05Z","lastTransitionTime":"2026-01-28T15:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:05 crc kubenswrapper[4981]: E0128 15:04:05.559522 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.565568 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.565614 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.565629 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.565651 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.565667 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:05Z","lastTransitionTime":"2026-01-28T15:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:05 crc kubenswrapper[4981]: E0128 15:04:05.582836 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.587430 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.587484 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.587504 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.587532 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.587549 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:05Z","lastTransitionTime":"2026-01-28T15:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.591469 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs\") pod \"network-metrics-daemon-8rsts\" (UID: \"d5fda60c-a87b-4810-81df-4c7717d34ac1\") " pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:05 crc kubenswrapper[4981]: E0128 15:04:05.591657 4981 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:04:05 crc kubenswrapper[4981]: E0128 15:04:05.591762 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs podName:d5fda60c-a87b-4810-81df-4c7717d34ac1 nodeName:}" failed. No retries permitted until 2026-01-28 15:04:21.591738408 +0000 UTC m=+73.043896729 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs") pod "network-metrics-daemon-8rsts" (UID: "d5fda60c-a87b-4810-81df-4c7717d34ac1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:04:05 crc kubenswrapper[4981]: E0128 15:04:05.606559 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.611554 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.611743 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.611816 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.611881 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.611941 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:05Z","lastTransitionTime":"2026-01-28T15:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:05 crc kubenswrapper[4981]: E0128 15:04:05.625096 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:05Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:05 crc kubenswrapper[4981]: E0128 15:04:05.625520 4981 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.627355 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.627452 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.627511 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.627569 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.627821 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:05Z","lastTransitionTime":"2026-01-28T15:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.734132 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.734174 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.734208 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.734225 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.734235 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:05Z","lastTransitionTime":"2026-01-28T15:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.837354 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.837406 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.837424 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.837448 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.837465 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:05Z","lastTransitionTime":"2026-01-28T15:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.940733 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.940804 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.940827 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.940859 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:05 crc kubenswrapper[4981]: I0128 15:04:05.940882 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:05Z","lastTransitionTime":"2026-01-28T15:04:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.004703 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovnkube-controller/1.log" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.008793 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerStarted","Data":"ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a"} Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.009471 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.035684 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.043824 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.043859 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.043868 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.043881 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.043889 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:06Z","lastTransitionTime":"2026-01-28T15:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.050140 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.060946 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.070890 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.087910 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25e10ec2465eb24df8fccb882edb74b3c325dcf656c14bcac1622889c25a9d5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:03:51Z\\\",\\\"message\\\":\\\"/factory.go:160\\\\nI0128 15:03:51.183497 6565 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.183846 6565 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.183947 6565 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.184344 6565 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:03:51.184379 6565 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:03:51.184406 6565 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:03:51.184416 6565 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:03:51.184442 6565 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:03:51.184483 6565 factory.go:656] Stopping watch factory\\\\nI0128 15:03:51.184508 6565 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:03:51.184520 6565 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:03:51.184528 6565 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:03:51.184536 6565 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:03:51.184543 6565 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.103899 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.115887 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.129648 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.141118 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.146643 4981 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.146713 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.146729 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.146756 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.146774 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:06Z","lastTransitionTime":"2026-01-28T15:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.161727 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.174507 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.186737 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ab363e-7b23-41a2-b81a-f304940ea4e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401004bf4f52b13621d039da3ad10fa2800e605b8e574b16a9200f0447169a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdec11bcd96c80a3dcffa4a5da6e5541079caace1911ad9d3387310299c033b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44b16b8020efea12a40e946909e999169518fb90219b88c84df8eb2696b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.201216 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.215161 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.228128 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.243389 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.249157 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.249213 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.249225 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.249241 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.249253 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:06Z","lastTransitionTime":"2026-01-28T15:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.262709 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:06Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.290167 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 14:48:03.257287116 +0000 UTC Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.352409 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.352475 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.352493 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.352521 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.352538 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:06Z","lastTransitionTime":"2026-01-28T15:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.455273 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.455325 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.455339 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.455356 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.455368 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:06Z","lastTransitionTime":"2026-01-28T15:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.558812 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.558865 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.558877 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.558895 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.558904 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:06Z","lastTransitionTime":"2026-01-28T15:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.664571 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.664858 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.664875 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.664898 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.664914 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:06Z","lastTransitionTime":"2026-01-28T15:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.768121 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.768167 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.768180 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.768222 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.768234 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:06Z","lastTransitionTime":"2026-01-28T15:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.871020 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.871123 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.871135 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.871153 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.871166 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:06Z","lastTransitionTime":"2026-01-28T15:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.974395 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.974451 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.974463 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.974484 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:06 crc kubenswrapper[4981]: I0128 15:04:06.974498 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:06Z","lastTransitionTime":"2026-01-28T15:04:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.017799 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovnkube-controller/2.log" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.018967 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovnkube-controller/1.log" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.028542 4981 generic.go:334] "Generic (PLEG): container finished" podID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerID="ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a" exitCode=1 Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.028599 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerDied","Data":"ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a"} Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.028638 4981 scope.go:117] "RemoveContainer" containerID="25e10ec2465eb24df8fccb882edb74b3c325dcf656c14bcac1622889c25a9d5f" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.029497 4981 scope.go:117] "RemoveContainer" containerID="ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a" Jan 28 15:04:07 crc kubenswrapper[4981]: E0128 15:04:07.029713 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.052457 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.070515 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.076841 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.076896 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.076910 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.076929 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.076943 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:07Z","lastTransitionTime":"2026-01-28T15:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.086874 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\"
:\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.123244 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{
\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\
"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://25e10ec2465eb24df8fccb882edb74b3c325dcf656c14bcac1622889c25a9d5f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:03:51Z\\\",\\\"message\\\":\\\"/factory.go:160\\\\nI0128 15:03:51.183497 6565 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.183846 6565 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.183947 6565 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 15:03:51.184344 6565 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:03:51.184379 6565 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:03:51.184406 6565 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:03:51.184416 6565 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:03:51.184442 6565 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:03:51.184483 6565 factory.go:656] Stopping watch factory\\\\nI0128 15:03:51.184508 6565 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:03:51.184520 6565 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:03:51.184528 6565 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 15:03:51.184536 6565 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:03:51.184543 6565 handler.go:208] Removed 
*v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:04:06Z\\\",\\\"message\\\":\\\"enshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:04:06.428926 6733 services_controller.go:360] Finished syncing service cluster-version-operator on namespace openshift-cluster-version for network=default : 1.823706ms\\\\nF0128 15:04:06.428930 6733 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:04:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.145962 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b8
9c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.162067 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.174977 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.178793 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.178850 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.178868 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.178892 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.178910 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:07Z","lastTransitionTime":"2026-01-28T15:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.188742 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.204054 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:07Z is after 
2025-08-24T17:21:41Z" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.217706 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.231449 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ab363e-7b23-41a2-b81a-f304940ea4e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401004bf4f52b13621d039da3ad10fa2800e605b8e574b16a9200f0447169a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdec11bcd96c80a3dcffa4a5da6e5541079caace1911ad9d3387310299c033b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44b16b8020efea12a40e946909e999169518fb90219b88c84df8eb2696b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.251312 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.268772 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.282017 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.282088 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.282110 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.282141 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.282164 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:07Z","lastTransitionTime":"2026-01-28T15:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.284957 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.291213 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:55:09.880877974 +0000 UTC Jan 28 
15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.306827 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.318147 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.318168 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:07 crc kubenswrapper[4981]: E0128 15:04:07.318263 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.318315 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:07 crc kubenswrapper[4981]: E0128 15:04:07.318452 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:07 crc kubenswrapper[4981]: E0128 15:04:07.318505 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.318538 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:07 crc kubenswrapper[4981]: E0128 15:04:07.318710 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.324305 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.344512 4981 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:07Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.384501 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.384570 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.384593 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.384629 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.384654 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:07Z","lastTransitionTime":"2026-01-28T15:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.488238 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.488310 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.488332 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.488361 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.488385 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:07Z","lastTransitionTime":"2026-01-28T15:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.591245 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.591284 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.591292 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.591304 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.591314 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:07Z","lastTransitionTime":"2026-01-28T15:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.694044 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.694091 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.694102 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.694118 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.694130 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:07Z","lastTransitionTime":"2026-01-28T15:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.797062 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.797112 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.797125 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.797143 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.797155 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:07Z","lastTransitionTime":"2026-01-28T15:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.900506 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.900572 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.900597 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.900626 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:07 crc kubenswrapper[4981]: I0128 15:04:07.900673 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:07Z","lastTransitionTime":"2026-01-28T15:04:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.004027 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.004084 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.004101 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.004128 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.004144 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:08Z","lastTransitionTime":"2026-01-28T15:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.058247 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovnkube-controller/2.log" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.062947 4981 scope.go:117] "RemoveContainer" containerID="ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a" Jan 28 15:04:08 crc kubenswrapper[4981]: E0128 15:04:08.063178 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.078781 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.095320 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ab363e-7b23-41a2-b81a-f304940ea4e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401004bf4f52b13621d039da3ad10fa2800e605b8e574b16a9200f0447169a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdec11bcd96c80a3dcffa4a5da6e5541079caace1911ad9d3387310299c033b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44b16b8020efea12a40e946909e999169518fb90219b88c84df8eb2696b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.107170 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.107246 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.107258 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.107276 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.107289 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:08Z","lastTransitionTime":"2026-01-28T15:04:08Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.112857 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.127651 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.146777 4981 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.165502 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.185414 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.200166 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.210172 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.210240 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.210252 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.210270 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.210284 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:08Z","lastTransitionTime":"2026-01-28T15:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.221993 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.239156 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.257820 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.287907 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:04:06Z\\\",\\\"message\\\":\\\"enshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:04:06.428926 6733 services_controller.go:360] Finished syncing service cluster-version-operator on namespace openshift-cluster-version for network=default : 1.823706ms\\\\nF0128 15:04:06.428930 6733 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:04:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.291369 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 12:20:27.933844069 +0000 UTC Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.307790 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.313252 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.313297 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.313311 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.313357 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.313369 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:08Z","lastTransitionTime":"2026-01-28T15:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.325114 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.343983 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.355076 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.367776 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:08Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.416526 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.416574 4981 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.416586 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.416607 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.416621 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:08Z","lastTransitionTime":"2026-01-28T15:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.519388 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.519446 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.519462 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.519485 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.519502 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:08Z","lastTransitionTime":"2026-01-28T15:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.622946 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.623024 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.623044 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.623072 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.623093 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:08Z","lastTransitionTime":"2026-01-28T15:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.726146 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.726256 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.726281 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.726312 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.726335 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:08Z","lastTransitionTime":"2026-01-28T15:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.829276 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.829354 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.829378 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.829413 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.829431 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:08Z","lastTransitionTime":"2026-01-28T15:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.932772 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.932840 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.932852 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.932874 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:08 crc kubenswrapper[4981]: I0128 15:04:08.932885 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:08Z","lastTransitionTime":"2026-01-28T15:04:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.035595 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.035650 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.035665 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.035682 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.035695 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:09Z","lastTransitionTime":"2026-01-28T15:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.138283 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.138336 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.138347 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.138363 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.138376 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:09Z","lastTransitionTime":"2026-01-28T15:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.240976 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.241026 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.241042 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.241061 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.241075 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:09Z","lastTransitionTime":"2026-01-28T15:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.292598 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 08:06:05.429493074 +0000 UTC Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.317601 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.317639 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.317657 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.317680 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:09 crc kubenswrapper[4981]: E0128 15:04:09.317759 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:09 crc kubenswrapper[4981]: E0128 15:04:09.317821 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:09 crc kubenswrapper[4981]: E0128 15:04:09.317872 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:09 crc kubenswrapper[4981]: E0128 15:04:09.317951 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.334761 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ab363e-7b23-41a2-b81a-f304940ea4e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401004bf4f52b13621d039da3ad10fa2800e605b8e574b16a9200f0447169a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdec11bcd96c80a3dcffa4a5da6e5541079caace1911ad9d3387310299c033b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44b16b8020efea12a40e946909e999169518fb90219b88c84df8eb2696b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.344206 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.344406 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.344475 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.344582 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.344678 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:09Z","lastTransitionTime":"2026-01-28T15:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.346774 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.360332 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.380564 4981 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.394261 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:09Z is after 2025-08-24T17:21:41Z" Jan 28 
15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.413059 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:09Z is after 2025-08-24T17:21:41Z"
Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.432355 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:09Z is after 2025-08-24T17:21:41Z"
Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.448293 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.448363 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.448378 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.448403 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.448419 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:09Z","lastTransitionTime":"2026-01-28T15:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.450280 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:09Z is after 2025-08-24T17:21:41Z"
Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.472783 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:09Z is after 2025-08-24T17:21:41Z"
Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.488731 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:09Z is after 2025-08-24T17:21:41Z"
Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.510699 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:09Z is after 2025-08-24T17:21:41Z"
Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.527998 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:09Z is after 2025-08-24T17:21:41Z"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.550962 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.551132 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.551273 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.551411 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.551540 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:09Z","lastTransitionTime":"2026-01-28T15:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.565719 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.583067 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.600727 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:09Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.636182 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:04:06Z\\\",\\\"message\\\":\\\"enshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:04:06.428926 6733 services_controller.go:360] Finished syncing service cluster-version-operator on namespace openshift-cluster-version for network=default : 1.823706ms\\\\nF0128 15:04:06.428930 6733 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:04:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:09Z is after 2025-08-24T17:21:41Z"
Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.654444 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.654863 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.654928 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.654962 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:09 crc kubenswrapper[4981]: I0128 15:04:09.654986 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:09Z","lastTransitionTime":"2026-01-28T15:04:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
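The webhook failure above is a plain x509 validity-window error: the serving certificate for pod.network-node-identity.openshift.io expired on 2025-08-24, while the node clock reads 2026-01-28. A minimal sketch of the same NotBefore/NotAfter check using Go's crypto/x509 (the certificate path is a hypothetical stand-in, not the webhook's real location):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path for illustration; not the webhook's actual cert file.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The same window check that yields "certificate has expired or is not
	// yet valid: current time ... is after ...".
	now := time.Now().UTC()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Printf("valid until %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	}
}
```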
Jan 28 15:04:10 crc kubenswrapper[4981]: I0128 15:04:10.293507 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 06:05:12.766793401 +0000 UTC
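The rotation deadline in the line above changes on every retry (see the further certificate_manager.go entries below) because client-go jitters it: the manager picks a random point at roughly 70-90% of the certificate's lifetime (the upstream comment describes it as "80% +/- 10%"; the exact fractions here are an assumption), and since each computed deadline is already in the past, rotation is attempted again and the deadline recomputed about once per second. A sketch of that computation:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline mimics (does not reproduce exactly) client-go's jittered
// next-rotation deadline: a random point in the 70%-90% span of the
// certificate's validity window.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	// notAfter is copied from the log; notBefore is an assumed issue date.
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.AddDate(-1, 0, 0)
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter).UTC())
	}
}
```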
Jan 28 15:04:11 crc kubenswrapper[4981]: I0128 15:04:11.295691 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 11:07:20.294932513 +0000 UTC
Jan 28 15:04:11 crc kubenswrapper[4981]: I0128 15:04:11.318316 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:04:11 crc kubenswrapper[4981]: I0128 15:04:11.318357 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts"
Jan 28 15:04:11 crc kubenswrapper[4981]: I0128 15:04:11.318348 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:04:11 crc kubenswrapper[4981]: E0128 15:04:11.318485 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:04:11 crc kubenswrapper[4981]: I0128 15:04:11.318522 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:04:11 crc kubenswrapper[4981]: E0128 15:04:11.318610 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:04:11 crc kubenswrapper[4981]: E0128 15:04:11.318790 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:04:11 crc kubenswrapper[4981]: E0128 15:04:11.318913 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1"
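Each "no CNI configuration file in /etc/kubernetes/cni/net.d/" message traces back to the runtime scanning that directory for a network config; until it is populated (normally by the OVN-Kubernetes node pod, which appears blocked here on the expired webhook certificate above), pod sandboxes cannot be created. A self-contained sketch of the check, assuming libcni's behavior of accepting *.conf, *.conflist, and *.json files:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cniConfigured reports whether confDir holds at least one CNI config file,
// matching the extensions libcni scans for.
func cniConfigured(confDir string) (bool, error) {
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			return false, err
		}
		if len(matches) > 0 {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigured("/etc/kubernetes/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if !ok {
		fmt.Println("network plugin not ready: no CNI configuration file")
	}
}
```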
Jan 28 15:04:12 crc kubenswrapper[4981]: I0128 15:04:12.296533 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 19:15:12.836296144 +0000 UTC
Jan 28 15:04:13 crc kubenswrapper[4981]: I0128 15:04:13.297150 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 13:49:23.584155457 +0000 UTC
Jan 28 15:04:13 crc kubenswrapper[4981]: I0128 15:04:13.318635 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:04:13 crc kubenswrapper[4981]: I0128 15:04:13.318706 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:04:13 crc kubenswrapper[4981]: I0128 15:04:13.318749 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:04:13 crc kubenswrapper[4981]: E0128 15:04:13.318797 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:04:13 crc kubenswrapper[4981]: I0128 15:04:13.318840 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts"
Jan 28 15:04:13 crc kubenswrapper[4981]: E0128 15:04:13.318987 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:04:13 crc kubenswrapper[4981]: E0128 15:04:13.319132 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1"
Jan 28 15:04:13 crc kubenswrapper[4981]: E0128 15:04:13.319331 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
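The recurring setters.go:603 "Node became not ready" entries carry a full NodeCondition object. A sketch of constructing the same Ready=False condition with the Kubernetes API types, assuming k8s.io/api and k8s.io/apimachinery are available in go.mod (field values copied from the log; the construction itself is illustrative, not the kubelet's code):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	now := metav1.Now()
	cond := corev1.NodeCondition{
		Type:               corev1.NodeReady,
		Status:             corev1.ConditionFalse,
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	out, _ := json.Marshal(cond)
	fmt.Println(string(out)) // same shape as the condition={...} in the log
}
```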
Jan 28 15:04:14 crc kubenswrapper[4981]: I0128 15:04:14.298009 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 14:14:23.354908793 +0000 UTC
Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.059153 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.059211 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.059226 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.059240 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.059250 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:15Z","lastTransitionTime":"2026-01-28T15:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.161760 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.161829 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.161842 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.161860 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.161872 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:15Z","lastTransitionTime":"2026-01-28T15:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.264745 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.264788 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.264799 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.264814 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.264824 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:15Z","lastTransitionTime":"2026-01-28T15:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.298151 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 02:09:23.592037266 +0000 UTC Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.317839 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.317946 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:15 crc kubenswrapper[4981]: E0128 15:04:15.318001 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:15 crc kubenswrapper[4981]: E0128 15:04:15.318170 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.318229 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:15 crc kubenswrapper[4981]: E0128 15:04:15.318372 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.318580 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:15 crc kubenswrapper[4981]: E0128 15:04:15.318730 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.367695 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.367756 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.367775 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.367800 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.367818 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:15Z","lastTransitionTime":"2026-01-28T15:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.470242 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.470574 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.470648 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.470720 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.470783 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:15Z","lastTransitionTime":"2026-01-28T15:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.573866 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.574135 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.574230 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.574324 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.574385 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:15Z","lastTransitionTime":"2026-01-28T15:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.676931 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.676992 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.677003 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.677021 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.677050 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:15Z","lastTransitionTime":"2026-01-28T15:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.779759 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.779807 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.779815 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.779829 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.779838 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:15Z","lastTransitionTime":"2026-01-28T15:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.882905 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.882969 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.882987 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.883012 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.883029 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:15Z","lastTransitionTime":"2026-01-28T15:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.932262 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.932402 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.932462 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.932538 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.932598 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:15Z","lastTransitionTime":"2026-01-28T15:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:15 crc kubenswrapper[4981]: E0128 15:04:15.946111 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.951752 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.951846 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.951917 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.951979 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.952047 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:15Z","lastTransitionTime":"2026-01-28T15:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:15 crc kubenswrapper[4981]: E0128 15:04:15.972451 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.977544 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.977637 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.977708 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.977774 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.977846 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:15Z","lastTransitionTime":"2026-01-28T15:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:15 crc kubenswrapper[4981]: E0128 15:04:15.990972 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:15Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.994498 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.994542 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.994554 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.994571 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:15 crc kubenswrapper[4981]: I0128 15:04:15.994583 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:15Z","lastTransitionTime":"2026-01-28T15:04:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:16 crc kubenswrapper[4981]: E0128 15:04:16.012904 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.016468 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.016583 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.016661 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.016766 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.016855 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:16Z","lastTransitionTime":"2026-01-28T15:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:16 crc kubenswrapper[4981]: E0128 15:04:16.037411 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:16Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:16 crc kubenswrapper[4981]: E0128 15:04:16.037675 4981 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.039653 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.039757 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.039817 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.039890 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.039961 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:16Z","lastTransitionTime":"2026-01-28T15:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.143564 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.143911 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.144053 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.144237 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.144417 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:16Z","lastTransitionTime":"2026-01-28T15:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.247109 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.247173 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.247229 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.247259 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.247312 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:16Z","lastTransitionTime":"2026-01-28T15:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.298396 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 00:35:52.003105987 +0000 UTC Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.350172 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.350248 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.350262 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.350284 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.350301 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:16Z","lastTransitionTime":"2026-01-28T15:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.452631 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.452671 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.452682 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.452698 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.452710 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:16Z","lastTransitionTime":"2026-01-28T15:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.555355 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.555442 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.555456 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.555480 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.555493 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:16Z","lastTransitionTime":"2026-01-28T15:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.657425 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.657484 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.657500 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.657524 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.657541 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:16Z","lastTransitionTime":"2026-01-28T15:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.759972 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.760010 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.760021 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.760037 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.760051 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:16Z","lastTransitionTime":"2026-01-28T15:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.863278 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.863320 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.863331 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.863349 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.863360 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:16Z","lastTransitionTime":"2026-01-28T15:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.965247 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.965281 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.965291 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.965305 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:16 crc kubenswrapper[4981]: I0128 15:04:16.965315 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:16Z","lastTransitionTime":"2026-01-28T15:04:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.068933 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.068990 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.069001 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.069020 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.069037 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:17Z","lastTransitionTime":"2026-01-28T15:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.172005 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.172082 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.172104 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.172135 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.172159 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:17Z","lastTransitionTime":"2026-01-28T15:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.274265 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.274306 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.274314 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.274328 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.274338 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:17Z","lastTransitionTime":"2026-01-28T15:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.298602 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 15:44:41.396572453 +0000 UTC Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.317899 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.317919 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:17 crc kubenswrapper[4981]: E0128 15:04:17.318037 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.317935 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.317919 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:17 crc kubenswrapper[4981]: E0128 15:04:17.318164 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:17 crc kubenswrapper[4981]: E0128 15:04:17.318110 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:17 crc kubenswrapper[4981]: E0128 15:04:17.318430 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.380792 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.380852 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.380870 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.380894 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.380911 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:17Z","lastTransitionTime":"2026-01-28T15:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.484038 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.484095 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.484104 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.484118 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.484129 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:17Z","lastTransitionTime":"2026-01-28T15:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.587588 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.587638 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.587651 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.587670 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.587684 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:17Z","lastTransitionTime":"2026-01-28T15:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.691443 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.691498 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.691516 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.691539 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.691557 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:17Z","lastTransitionTime":"2026-01-28T15:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.794032 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.794123 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.794138 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.794201 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.794219 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:17Z","lastTransitionTime":"2026-01-28T15:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.896874 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.896945 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.896966 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.897064 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.897087 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:17Z","lastTransitionTime":"2026-01-28T15:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.999714 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.999755 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.999766 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.999784 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:17 crc kubenswrapper[4981]: I0128 15:04:17.999796 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:17Z","lastTransitionTime":"2026-01-28T15:04:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.102957 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.103003 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.103015 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.103035 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.103047 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:18Z","lastTransitionTime":"2026-01-28T15:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.207145 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.207220 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.207235 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.207255 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.207270 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:18Z","lastTransitionTime":"2026-01-28T15:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.299178 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 17:37:35.565412655 +0000 UTC Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.313486 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.313517 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.313525 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.313537 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.313546 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:18Z","lastTransitionTime":"2026-01-28T15:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.416526 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.416560 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.416569 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.416583 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.416593 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:18Z","lastTransitionTime":"2026-01-28T15:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.519063 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.519109 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.519120 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.519139 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.519154 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:18Z","lastTransitionTime":"2026-01-28T15:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.622106 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.622143 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.622154 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.622172 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.622202 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:18Z","lastTransitionTime":"2026-01-28T15:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.724403 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.724465 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.724481 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.724507 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.724525 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:18Z","lastTransitionTime":"2026-01-28T15:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.826711 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.826757 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.826767 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.826784 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.826796 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:18Z","lastTransitionTime":"2026-01-28T15:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.929487 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.929521 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.929529 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.929541 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:18 crc kubenswrapper[4981]: I0128 15:04:18.929552 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:18Z","lastTransitionTime":"2026-01-28T15:04:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.031531 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.031591 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.031601 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.031615 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.031625 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:19Z","lastTransitionTime":"2026-01-28T15:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.133705 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.133751 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.133762 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.133780 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.133793 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:19Z","lastTransitionTime":"2026-01-28T15:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.236657 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.236698 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.236716 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.236733 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.236744 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:19Z","lastTransitionTime":"2026-01-28T15:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.300388 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 14:12:17.913801822 +0000 UTC Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.317896 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.317956 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.317983 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.317931 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:19 crc kubenswrapper[4981]: E0128 15:04:19.318126 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:19 crc kubenswrapper[4981]: E0128 15:04:19.318255 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:19 crc kubenswrapper[4981]: E0128 15:04:19.318313 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:19 crc kubenswrapper[4981]: E0128 15:04:19.318369 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.334609 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.340915 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.340959 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.340973 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.340992 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.341007 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:19Z","lastTransitionTime":"2026-01-28T15:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.350327 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.363768 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.381797 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:04:06Z\\\",\\\"message\\\":\\\"enshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:04:06.428926 6733 services_controller.go:360] Finished syncing service cluster-version-operator on namespace openshift-cluster-version for network=default : 1.823706ms\\\\nF0128 15:04:06.428930 6733 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:04:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.396850 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.410706 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.424979 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.439583 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.444022 4981 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.444069 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.444087 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.444114 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.444132 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:19Z","lastTransitionTime":"2026-01-28T15:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.454290 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.465783 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.477762 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ab363e-7b23-41a2-b81a-f304940ea4e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401004bf4f52b13621d039da3ad10fa2800e605b8e574b16a9200f0447169a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdec11bcd96c80a3dcffa4a5da6e5541079caace1911ad9d3387310299c033b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44b16b8020efea12a40e946909e999169518fb90219b88c84df8eb2696b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.490998 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.504346 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.515120 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.530232 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.545768 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.547311 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.547371 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.547382 4981 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.547398 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.547409 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:19Z","lastTransitionTime":"2026-01-28T15:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.557769 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:19Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.649835 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.649861 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.649869 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.649882 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.649892 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:19Z","lastTransitionTime":"2026-01-28T15:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.752600 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.752642 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.752674 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.752691 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.752703 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:19Z","lastTransitionTime":"2026-01-28T15:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.855485 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.855537 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.855570 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.855590 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.855602 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:19Z","lastTransitionTime":"2026-01-28T15:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.957779 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.957834 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.957851 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.957877 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:19 crc kubenswrapper[4981]: I0128 15:04:19.957921 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:19Z","lastTransitionTime":"2026-01-28T15:04:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.060934 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.060991 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.061002 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.061016 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.061026 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:20Z","lastTransitionTime":"2026-01-28T15:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.163869 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.163924 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.163941 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.163963 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.163980 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:20Z","lastTransitionTime":"2026-01-28T15:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.266325 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.266361 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.266371 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.266388 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.266402 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:20Z","lastTransitionTime":"2026-01-28T15:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.301482 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 18:58:00.970610458 +0000 UTC Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.368722 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.368817 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.368838 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.368866 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.368883 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:20Z","lastTransitionTime":"2026-01-28T15:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.471401 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.471459 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.471481 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.471512 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.471526 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:20Z","lastTransitionTime":"2026-01-28T15:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.574074 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.574114 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.574124 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.574145 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.574155 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:20Z","lastTransitionTime":"2026-01-28T15:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.676278 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.676330 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.676348 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.676370 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.676393 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:20Z","lastTransitionTime":"2026-01-28T15:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.778885 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.778927 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.778938 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.778954 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.778967 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:20Z","lastTransitionTime":"2026-01-28T15:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.881174 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.881238 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.881250 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.881264 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.881274 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:20Z","lastTransitionTime":"2026-01-28T15:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.983687 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.983744 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.983756 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.983776 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:20 crc kubenswrapper[4981]: I0128 15:04:20.983787 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:20Z","lastTransitionTime":"2026-01-28T15:04:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.087688 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.087744 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.087762 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.087787 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.087803 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:21Z","lastTransitionTime":"2026-01-28T15:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.191070 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.191123 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.191136 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.191156 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.191173 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:21Z","lastTransitionTime":"2026-01-28T15:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.295321 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.295404 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.295428 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.295461 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.295480 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:21Z","lastTransitionTime":"2026-01-28T15:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.302511 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 17:57:43.643311939 +0000 UTC Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.318054 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:21 crc kubenswrapper[4981]: E0128 15:04:21.318341 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.318400 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.318555 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.318843 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:21 crc kubenswrapper[4981]: E0128 15:04:21.318836 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:21 crc kubenswrapper[4981]: E0128 15:04:21.318952 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:21 crc kubenswrapper[4981]: E0128 15:04:21.319040 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.319907 4981 scope.go:117] "RemoveContainer" containerID="ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a" Jan 28 15:04:21 crc kubenswrapper[4981]: E0128 15:04:21.320122 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.403644 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.403700 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.403715 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.403748 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.403763 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:21Z","lastTransitionTime":"2026-01-28T15:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.506790 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.506841 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.506863 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.506889 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.506912 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:21Z","lastTransitionTime":"2026-01-28T15:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.609292 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.609363 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.609379 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.609409 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.609427 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:21Z","lastTransitionTime":"2026-01-28T15:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.678887 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs\") pod \"network-metrics-daemon-8rsts\" (UID: \"d5fda60c-a87b-4810-81df-4c7717d34ac1\") " pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:21 crc kubenswrapper[4981]: E0128 15:04:21.679029 4981 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:04:21 crc kubenswrapper[4981]: E0128 15:04:21.679083 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs podName:d5fda60c-a87b-4810-81df-4c7717d34ac1 nodeName:}" failed. No retries permitted until 2026-01-28 15:04:53.679067757 +0000 UTC m=+105.131225998 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs") pod "network-metrics-daemon-8rsts" (UID: "d5fda60c-a87b-4810-81df-4c7717d34ac1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.712050 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.712137 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.712163 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.712238 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.712266 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:21Z","lastTransitionTime":"2026-01-28T15:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.815717 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.815772 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.815782 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.815806 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.815821 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:21Z","lastTransitionTime":"2026-01-28T15:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.918297 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.918354 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.918364 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.918389 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:21 crc kubenswrapper[4981]: I0128 15:04:21.918404 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:21Z","lastTransitionTime":"2026-01-28T15:04:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.022054 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.022145 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.022170 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.022286 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.022378 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:22Z","lastTransitionTime":"2026-01-28T15:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.125085 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.125168 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.125215 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.125292 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.125353 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:22Z","lastTransitionTime":"2026-01-28T15:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.228386 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.228449 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.228461 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.228485 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.228501 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:22Z","lastTransitionTime":"2026-01-28T15:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.303357 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 13:33:09.870711988 +0000 UTC Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.330886 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.330921 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.330931 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.330948 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.330959 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:22Z","lastTransitionTime":"2026-01-28T15:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.434519 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.434560 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.434570 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.434589 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.434600 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:22Z","lastTransitionTime":"2026-01-28T15:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.537835 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.537910 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.537929 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.537957 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.537977 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:22Z","lastTransitionTime":"2026-01-28T15:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.641368 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.641419 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.641434 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.641458 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.641472 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:22Z","lastTransitionTime":"2026-01-28T15:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.744840 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.744896 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.744907 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.744921 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.744948 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:22Z","lastTransitionTime":"2026-01-28T15:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.847505 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.847562 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.847581 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.847604 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.847621 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:22Z","lastTransitionTime":"2026-01-28T15:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.950704 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.950737 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.950745 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.950759 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:22 crc kubenswrapper[4981]: I0128 15:04:22.950769 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:22Z","lastTransitionTime":"2026-01-28T15:04:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.054350 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.054404 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.054417 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.054438 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.054451 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:23Z","lastTransitionTime":"2026-01-28T15:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.111274 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lwvh4_3cd6b29e-682c-4aec-b039-70d6d75cbcbc/kube-multus/0.log" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.111318 4981 generic.go:334] "Generic (PLEG): container finished" podID="3cd6b29e-682c-4aec-b039-70d6d75cbcbc" containerID="1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c" exitCode=1 Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.111346 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lwvh4" event={"ID":"3cd6b29e-682c-4aec-b039-70d6d75cbcbc","Type":"ContainerDied","Data":"1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c"} Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.111717 4981 scope.go:117] "RemoveContainer" containerID="1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.129129 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.145873 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.158672 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.158706 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.158714 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.158727 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.158735 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:23Z","lastTransitionTime":"2026-01-28T15:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.173932 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.185295 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.198767 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.228101 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:04:06Z\\\",\\\"message\\\":\\\"enshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:04:06.428926 6733 services_controller.go:360] Finished syncing service cluster-version-operator on namespace openshift-cluster-version for network=default : 1.823706ms\\\\nF0128 15:04:06.428930 6733 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:04:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.261842 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.261878 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.261888 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.261906 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.261918 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:23Z","lastTransitionTime":"2026-01-28T15:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.281726 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.304215 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 04:35:26.661490625 +0000 UTC Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.305518 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ab363e-7b23-41a2-b81a-f304940ea4e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401004bf4f52b13621d039da3ad10fa2800e605b8e574b16a9200f0447169a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdec11bcd96c80a3dcffa4a5da6e5541079caace1911ad9d3387310299c033b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44b16b8020efea12a40e946909e999169518fb90219b88c84df8eb2696b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.318336 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.318386 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:23 crc kubenswrapper[4981]: E0128 15:04:23.318492 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.318557 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.318586 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:23 crc kubenswrapper[4981]: E0128 15:04:23.318688 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:23 crc kubenswrapper[4981]: E0128 15:04:23.319009 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:23 crc kubenswrapper[4981]: E0128 15:04:23.319060 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.320967 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.333562 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.350700 4981 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.362249 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:23Z is after 2025-08-24T17:21:41Z" Jan 28 
15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.364484 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.364550 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.364582 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.364599 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.364613 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:23Z","lastTransitionTime":"2026-01-28T15:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.377083 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.395526 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.412557 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:04:22Z\\\",\\\"message\\\":\\\"2026-01-28T15:03:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bafb8ab3-bd35-4172-a5e2-f1f5fce1ca97\\\\n2026-01-28T15:03:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bafb8ab3-bd35-4172-a5e2-f1f5fce1ca97 to /host/opt/cni/bin/\\\\n2026-01-28T15:03:37Z [verbose] multus-daemon 
started\\\\n2026-01-28T15:03:37Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:04:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.426552 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.442522 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:23Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.467378 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.467427 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.467440 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.467457 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.467470 4981 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:23Z","lastTransitionTime":"2026-01-28T15:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.570095 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.570150 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.570165 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.570213 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.570233 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:23Z","lastTransitionTime":"2026-01-28T15:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.673148 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.673563 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.673647 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.673732 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.673802 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:23Z","lastTransitionTime":"2026-01-28T15:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.776915 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.776953 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.776964 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.776979 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.776991 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:23Z","lastTransitionTime":"2026-01-28T15:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.879079 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.879354 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.879418 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.879477 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.879542 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:23Z","lastTransitionTime":"2026-01-28T15:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.982067 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.982135 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.982160 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.982221 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:23 crc kubenswrapper[4981]: I0128 15:04:23.982245 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:23Z","lastTransitionTime":"2026-01-28T15:04:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.084622 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.084690 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.084723 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.084742 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.084754 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:24Z","lastTransitionTime":"2026-01-28T15:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.116537 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lwvh4_3cd6b29e-682c-4aec-b039-70d6d75cbcbc/kube-multus/0.log" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.116600 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lwvh4" event={"ID":"3cd6b29e-682c-4aec-b039-70d6d75cbcbc","Type":"ContainerStarted","Data":"e787c9c633e01ce0e62e64cb5468c84dcf7452433437f827989301a9ef122368"} Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.134268 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.153664 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.169936 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.185605 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.187381 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.187424 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.187500 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.187522 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.187576 4981 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:24Z","lastTransitionTime":"2026-01-28T15:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.213509 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"}
,{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:04:06Z\\\",\\\"message\\\":\\\"enshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:04:06.428926 6733 services_controller.go:360] Finished syncing service cluster-version-operator on namespace openshift-cluster-version for network=default : 1.823706ms\\\\nF0128 15:04:06.428930 6733 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:04:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.229818 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.250681 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.270662 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.289082 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.290618 4981 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.290663 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.290676 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.290695 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.290741 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:24Z","lastTransitionTime":"2026-01-28T15:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.304384 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 15:21:11.521163744 +0000 UTC Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.314524 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.329311 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.345862 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ab363e-7b23-41a2-b81a-f304940ea4e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401004bf4f52b13621d039da3ad10fa2800e605b8e574b16a9200f0447169a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdec11bcd96c80a3dcffa4a5da6e5541079caace1911ad9d3387310299c033b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44b16b8020efea12a40e946909e999169518fb90219b88c84df8eb2696b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.366436 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.383828 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e787c9c633e01ce0e62e64cb5468c84dcf7452433437f827989301a9ef122368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:04:22Z\\\",\\\"message\\\":\\\"2026-01-28T15:03:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bafb8ab3-bd35-4172-a5e2-f1f5fce1ca97\\\\n2026-01-28T15:03:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bafb8ab3-bd35-4172-a5e2-f1f5fce1ca97 to /host/opt/cni/bin/\\\\n2026-01-28T15:03:37Z [verbose] multus-daemon started\\\\n2026-01-28T15:03:37Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:04:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.393308 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.393371 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.393389 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.393409 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.393426 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:24Z","lastTransitionTime":"2026-01-28T15:04:24Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.401627 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.424828 4981 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5b
c3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.438174 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:24Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.496911 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.496971 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.496985 4981 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.497006 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.497016 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:24Z","lastTransitionTime":"2026-01-28T15:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.599252 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.599302 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.599315 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.599333 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.599345 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:24Z","lastTransitionTime":"2026-01-28T15:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.701745 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.702020 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.702085 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.702160 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.702307 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:24Z","lastTransitionTime":"2026-01-28T15:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.804652 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.804690 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.804701 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.804719 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.804731 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:24Z","lastTransitionTime":"2026-01-28T15:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.908075 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.908129 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.908144 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.908169 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:24 crc kubenswrapper[4981]: I0128 15:04:24.908251 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:24Z","lastTransitionTime":"2026-01-28T15:04:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.010961 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.010997 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.011007 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.011023 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.011034 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:25Z","lastTransitionTime":"2026-01-28T15:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.113575 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.113612 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.113622 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.113637 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.113647 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:25Z","lastTransitionTime":"2026-01-28T15:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.215992 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.216026 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.216037 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.216050 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.216058 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:25Z","lastTransitionTime":"2026-01-28T15:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.305015 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 14:28:55.567633997 +0000 UTC Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.317786 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.317866 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.318244 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:25 crc kubenswrapper[4981]: E0128 15:04:25.318384 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.318605 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:25 crc kubenswrapper[4981]: E0128 15:04:25.318606 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:25 crc kubenswrapper[4981]: E0128 15:04:25.318686 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:25 crc kubenswrapper[4981]: E0128 15:04:25.319262 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.319675 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.319702 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.319709 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.319721 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.319730 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:25Z","lastTransitionTime":"2026-01-28T15:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.336872 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.421975 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.422023 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.422038 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.422061 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.422076 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:25Z","lastTransitionTime":"2026-01-28T15:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.524546 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.524594 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.524608 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.524623 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.524634 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:25Z","lastTransitionTime":"2026-01-28T15:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.627271 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.627331 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.627348 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.627372 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.627390 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:25Z","lastTransitionTime":"2026-01-28T15:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.730690 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.730755 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.730779 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.730811 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.730835 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:25Z","lastTransitionTime":"2026-01-28T15:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.834372 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.834437 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.834454 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.834478 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.834498 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:25Z","lastTransitionTime":"2026-01-28T15:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.937587 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.937634 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.937650 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.937669 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:25 crc kubenswrapper[4981]: I0128 15:04:25.937682 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:25Z","lastTransitionTime":"2026-01-28T15:04:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.040509 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.040556 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.040565 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.040581 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.040597 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:26Z","lastTransitionTime":"2026-01-28T15:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.142856 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.142932 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.142959 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.142988 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.143009 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:26Z","lastTransitionTime":"2026-01-28T15:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.245713 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.245747 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.245757 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.245772 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.245784 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:26Z","lastTransitionTime":"2026-01-28T15:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.256980 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.257023 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.257035 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.257056 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.257069 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:26Z","lastTransitionTime":"2026-01-28T15:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:26 crc kubenswrapper[4981]: E0128 15:04:26.277662 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.283905 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.283993 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.284022 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.284049 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.284066 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:26Z","lastTransitionTime":"2026-01-28T15:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.305339 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:10:45.044499706 +0000 UTC Jan 28 15:04:26 crc kubenswrapper[4981]: E0128 15:04:26.314730 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.320237 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.320296 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.320314 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.320340 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.320358 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:26Z","lastTransitionTime":"2026-01-28T15:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:26 crc kubenswrapper[4981]: E0128 15:04:26.345461 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.351520 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.351580 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.351598 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.351622 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.351640 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:26Z","lastTransitionTime":"2026-01-28T15:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:26 crc kubenswrapper[4981]: E0128 15:04:26.365746 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.371967 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.372031 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.372049 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.372074 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.372092 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:26Z","lastTransitionTime":"2026-01-28T15:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:26 crc kubenswrapper[4981]: E0128 15:04:26.396690 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:26Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:26 crc kubenswrapper[4981]: E0128 15:04:26.396927 4981 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.398743 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.398786 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.398807 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.398836 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.398858 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:26Z","lastTransitionTime":"2026-01-28T15:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.502061 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.502111 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.502124 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.502142 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.502154 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:26Z","lastTransitionTime":"2026-01-28T15:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.604749 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.604793 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.604804 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.604825 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.604836 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:26Z","lastTransitionTime":"2026-01-28T15:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.708623 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.708695 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.708718 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.708750 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.708780 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:26Z","lastTransitionTime":"2026-01-28T15:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.811548 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.811582 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.811590 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.811604 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.811615 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:26Z","lastTransitionTime":"2026-01-28T15:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.914479 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.914539 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.914555 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.914575 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:26 crc kubenswrapper[4981]: I0128 15:04:26.914589 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:26Z","lastTransitionTime":"2026-01-28T15:04:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.017967 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.018028 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.018047 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.018073 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.018094 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:27Z","lastTransitionTime":"2026-01-28T15:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.120141 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.120234 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.120252 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.120277 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.120296 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:27Z","lastTransitionTime":"2026-01-28T15:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.222691 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.222740 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.222752 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.222771 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.222787 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:27Z","lastTransitionTime":"2026-01-28T15:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.306558 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 14:14:58.94335312 +0000 UTC Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.317924 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.317934 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.318034 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:27 crc kubenswrapper[4981]: E0128 15:04:27.318156 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.318250 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:27 crc kubenswrapper[4981]: E0128 15:04:27.318379 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:27 crc kubenswrapper[4981]: E0128 15:04:27.318463 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:27 crc kubenswrapper[4981]: E0128 15:04:27.318487 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.324898 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.324938 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.324952 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.324970 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.324985 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:27Z","lastTransitionTime":"2026-01-28T15:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.427634 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.427694 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.427709 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.427734 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.427772 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:27Z","lastTransitionTime":"2026-01-28T15:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.530751 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.530821 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.530839 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.530862 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.530880 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:27Z","lastTransitionTime":"2026-01-28T15:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.633580 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.633632 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.633648 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.633671 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.633688 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:27Z","lastTransitionTime":"2026-01-28T15:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.735894 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.735943 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.735960 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.736298 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.736316 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:27Z","lastTransitionTime":"2026-01-28T15:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.839083 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.839130 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.839142 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.839159 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.839172 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:27Z","lastTransitionTime":"2026-01-28T15:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.942438 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.942481 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.942489 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.942506 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:27 crc kubenswrapper[4981]: I0128 15:04:27.942517 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:27Z","lastTransitionTime":"2026-01-28T15:04:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.045718 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.045789 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.045810 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.045834 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.045849 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:28Z","lastTransitionTime":"2026-01-28T15:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.149069 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.149118 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.149136 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.149153 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.149165 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:28Z","lastTransitionTime":"2026-01-28T15:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.251784 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.251816 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.251825 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.251839 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.251849 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:28Z","lastTransitionTime":"2026-01-28T15:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.307563 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 05:17:47.257304633 +0000 UTC Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.354113 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.354161 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.354171 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.354202 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.354212 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:28Z","lastTransitionTime":"2026-01-28T15:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.457071 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.457115 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.457128 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.457144 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.457155 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:28Z","lastTransitionTime":"2026-01-28T15:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.560341 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.560416 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.560627 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.560660 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.560683 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:28Z","lastTransitionTime":"2026-01-28T15:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.664357 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.664437 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.664457 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.664489 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.664514 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:28Z","lastTransitionTime":"2026-01-28T15:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.768370 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.768440 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.768458 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.768485 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.768504 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:28Z","lastTransitionTime":"2026-01-28T15:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.872857 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.872913 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.872924 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.872949 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.872963 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:28Z","lastTransitionTime":"2026-01-28T15:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.976288 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.976368 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.976389 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.976425 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:28 crc kubenswrapper[4981]: I0128 15:04:28.976454 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:28Z","lastTransitionTime":"2026-01-28T15:04:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.078748 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.078813 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.078836 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.078868 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.078891 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:29Z","lastTransitionTime":"2026-01-28T15:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.181885 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.181948 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.181972 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.182003 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.182024 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:29Z","lastTransitionTime":"2026-01-28T15:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.285084 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.285153 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.285172 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.285237 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.285263 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:29Z","lastTransitionTime":"2026-01-28T15:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.307755 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 00:01:33.529010314 +0000 UTC Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.318163 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.318242 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.318264 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:29 crc kubenswrapper[4981]: E0128 15:04:29.318416 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:29 crc kubenswrapper[4981]: E0128 15:04:29.318506 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.318500 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:29 crc kubenswrapper[4981]: E0128 15:04:29.318707 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:29 crc kubenswrapper[4981]: E0128 15:04:29.318888 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.333637 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.354886 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.376908 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.388739 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.388791 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.388805 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.388826 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.388838 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:29Z","lastTransitionTime":"2026-01-28T15:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.396065 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e787c9c633e01ce0e62e64cb5468c84dcf7452433437f827989301a9ef122368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:04:22Z\\\",\\\"message\\\":\\\"2026-01-28T15:03:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bafb8ab3-bd35-4172-a5e2-f1f5fce1ca97\\\\n2026-01-28T15:03:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bafb8ab3-bd35-4172-a5e2-f1f5fce1ca97 to /host/opt/cni/bin/\\\\n2026-01-28T15:03:37Z [verbose] multus-daemon started\\\\n2026-01-28T15:03:37Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:04:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.411217 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.427725 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2120f6-91ca-46f8-a729-d91d715e85d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dc14b3cf4e388495b2e92f6b68c6f252a0896d6d92fc7bf6786b0ae938e8ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b4ad4c73d139cb2ab8966a0ebfe6edf1642de2069cbe4f080d209792127e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b4ad4c73d139cb2ab8966a0ebfe6edf1642de2069cbe4f080d209792127e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.451258 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.472899 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.491509 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.493613 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.493658 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.493671 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.493689 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.493702 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:29Z","lastTransitionTime":"2026-01-28T15:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.508315 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.526954 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.539920 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.575231 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:04:06Z\\\",\\\"message\\\":\\\"enshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:04:06.428926 6733 services_controller.go:360] Finished syncing service cluster-version-operator on namespace openshift-cluster-version for network=default : 1.823706ms\\\\nF0128 15:04:06.428930 6733 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:04:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.596740 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.596779 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.596790 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.596809 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.596822 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:29Z","lastTransitionTime":"2026-01-28T15:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.604871 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ab363e-7b23-41a2-b81a-f304940ea4e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401004bf4f52b13621d039da3ad10fa2800e605b8e574b16a9200f0447169a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdec11bcd96c80a3dcffa4a5da6e5541079caace1911ad9d3387310299c033b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44b16b8020efea12a40e946909e999169518fb90219b88c84df8eb2696b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.619922 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.634861 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.655430 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.669590 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\
\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:29Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.700585 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.700622 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.700633 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.700649 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.700664 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:29Z","lastTransitionTime":"2026-01-28T15:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.803772 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.803843 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.803861 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.803888 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.803938 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:29Z","lastTransitionTime":"2026-01-28T15:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.907097 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.907162 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.907180 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.907240 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:29 crc kubenswrapper[4981]: I0128 15:04:29.907259 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:29Z","lastTransitionTime":"2026-01-28T15:04:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.010813 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.010872 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.010890 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.010913 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.010931 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:30Z","lastTransitionTime":"2026-01-28T15:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.114152 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.114231 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.114250 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.114273 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.114293 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:30Z","lastTransitionTime":"2026-01-28T15:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.217095 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.217143 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.217161 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.217183 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.217235 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:30Z","lastTransitionTime":"2026-01-28T15:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.308430 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 19:01:57.42915587 +0000 UTC Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.320368 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.320488 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.320515 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.320545 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.320629 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:30Z","lastTransitionTime":"2026-01-28T15:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.423951 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.424010 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.424022 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.424060 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.424116 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:30Z","lastTransitionTime":"2026-01-28T15:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.527231 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.527265 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.527301 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.527322 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.527334 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:30Z","lastTransitionTime":"2026-01-28T15:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.630332 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.630399 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.630416 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.630441 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.630459 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:30Z","lastTransitionTime":"2026-01-28T15:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.734918 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.734989 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.735009 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.735037 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.735056 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:30Z","lastTransitionTime":"2026-01-28T15:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.838946 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.839003 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.839017 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.839035 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.839045 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:30Z","lastTransitionTime":"2026-01-28T15:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.942036 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.942094 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.942111 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.942136 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:30 crc kubenswrapper[4981]: I0128 15:04:30.942155 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:30Z","lastTransitionTime":"2026-01-28T15:04:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.046043 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.046111 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.046135 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.046166 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.046221 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:31Z","lastTransitionTime":"2026-01-28T15:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.148370 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.148418 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.148430 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.148449 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.148462 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:31Z","lastTransitionTime":"2026-01-28T15:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.251654 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.251700 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.251712 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.251728 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.251746 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:31Z","lastTransitionTime":"2026-01-28T15:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.309540 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 11:26:10.295510159 +0000 UTC Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.318158 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.318343 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.318264 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:31 crc kubenswrapper[4981]: E0128 15:04:31.318428 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.318271 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:31 crc kubenswrapper[4981]: E0128 15:04:31.318559 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:31 crc kubenswrapper[4981]: E0128 15:04:31.318708 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:31 crc kubenswrapper[4981]: E0128 15:04:31.318843 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.355552 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.355650 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.355677 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.355712 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.355739 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:31Z","lastTransitionTime":"2026-01-28T15:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.457761 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.457793 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.457800 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.457813 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.457822 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:31Z","lastTransitionTime":"2026-01-28T15:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.561020 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.561064 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.561076 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.561092 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.561103 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:31Z","lastTransitionTime":"2026-01-28T15:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.664052 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.664086 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.664096 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.664113 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.664123 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:31Z","lastTransitionTime":"2026-01-28T15:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.766986 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.767041 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.767063 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.767091 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.767115 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:31Z","lastTransitionTime":"2026-01-28T15:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.870672 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.870743 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.870767 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.870796 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.870819 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:31Z","lastTransitionTime":"2026-01-28T15:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.972966 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.973006 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.973016 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.973036 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:31 crc kubenswrapper[4981]: I0128 15:04:31.973051 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:31Z","lastTransitionTime":"2026-01-28T15:04:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:32 crc kubenswrapper[4981]: I0128 15:04:32.075659 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:32 crc kubenswrapper[4981]: I0128 15:04:32.075705 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:32 crc kubenswrapper[4981]: I0128 15:04:32.075717 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:32 crc kubenswrapper[4981]: I0128 15:04:32.075739 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:32 crc kubenswrapper[4981]: I0128 15:04:32.075752 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:32Z","lastTransitionTime":"2026-01-28T15:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:32 crc kubenswrapper[4981]: I0128 15:04:32.179108 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:32 crc kubenswrapper[4981]: I0128 15:04:32.179205 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:32 crc kubenswrapper[4981]: I0128 15:04:32.179219 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:32 crc kubenswrapper[4981]: I0128 15:04:32.179239 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:32 crc kubenswrapper[4981]: I0128 15:04:32.179252 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:32Z","lastTransitionTime":"2026-01-28T15:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:32 crc kubenswrapper[4981]: I0128 15:04:32.283154 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:32 crc kubenswrapper[4981]: I0128 15:04:32.283256 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:32 crc kubenswrapper[4981]: I0128 15:04:32.283276 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:32 crc kubenswrapper[4981]: I0128 15:04:32.283305 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:32 crc kubenswrapper[4981]: I0128 15:04:32.283328 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:32Z","lastTransitionTime":"2026-01-28T15:04:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 15:04:32 crc kubenswrapper[4981]: I0128 15:04:32.310056 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:22:12.602068719 +0000 UTC
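Note how certificate_manager.go reports a different rotation deadline on each pass (18:22:12 here, 18:57:57 and later dates below) against the same 2026-02-24 expiration: the deadline is drawn with jitter from the certificate's validity window, so each evaluation lands somewhere new. A sketch of that computation, assuming the 70-90% lifetime fraction commonly cited for the upstream client-go certificate manager (the exact window may differ by version):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in the [70%, 90%] span of the
// certificate's lifetime. The fraction bounds are an assumption based on
// upstream client-go behavior.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(total) * frac))
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
	notBefore := notAfter.Add(-365 * 24 * time.Hour) // assumed issuance time, for illustration
	// Each evaluation yields a different deadline, matching the shifting
	// "rotation deadline is ..." values in the log.
	for i := 0; i < 3; i++ {
		fmt.Println(rotationDeadline(notBefore, notAfter))
	}
}
```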
Jan 28 15:04:33 crc kubenswrapper[4981]: I0128 15:04:33.311092 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 18:57:57.286500182 +0000 UTC
Jan 28 15:04:33 crc kubenswrapper[4981]: I0128 15:04:33.318645 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:04:33 crc kubenswrapper[4981]: I0128 15:04:33.318726 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:04:33 crc kubenswrapper[4981]: E0128 15:04:33.318765 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:04:33 crc kubenswrapper[4981]: I0128 15:04:33.318846 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts"
Jan 28 15:04:33 crc kubenswrapper[4981]: I0128 15:04:33.318904 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:04:33 crc kubenswrapper[4981]: E0128 15:04:33.318995 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:04:33 crc kubenswrapper[4981]: E0128 15:04:33.319100 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:04:33 crc kubenswrapper[4981]: E0128 15:04:33.319151 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1"
Jan 28 15:04:33 crc kubenswrapper[4981]: I0128 15:04:33.412659 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:04:33 crc kubenswrapper[4981]: I0128 15:04:33.412760 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:04:33 crc kubenswrapper[4981]: I0128 15:04:33.412796 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:04:33 crc kubenswrapper[4981]: E0128 15:04:33.412871 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:37.412842123 +0000 UTC m=+148.865000404 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:04:33 crc kubenswrapper[4981]: E0128 15:04:33.412873 4981 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 15:04:33 crc kubenswrapper[4981]: E0128 15:04:33.412951 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:05:37.412938345 +0000 UTC m=+148.865096616 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 15:04:33 crc kubenswrapper[4981]: E0128 15:04:33.413011 4981 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 15:04:33 crc kubenswrapper[4981]: E0128 15:04:33.413139 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 15:05:37.4131143 +0000 UTC m=+148.865272621 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
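The configmap.go and secret.go failures above are not API lookups failing; the kubelet serves volume sources from an internal object cache that each pod's references must be registered with before SetUp can proceed, and on this freshly restarted kubelet that registration has not happened yet. A self-contained sketch of the gating pattern behind the "not registered" message; all names here are hypothetical, for illustration only:

```go
package main

import (
	"fmt"
	"sync"
)

// objectCache gates reads on explicit registration, the way the kubelet's
// configmap/secret managers do; until a pod referencing an object is
// registered, lookups fail with "not registered" rather than hitting the API.
type objectCache struct {
	mu         sync.Mutex
	registered map[string]bool
}

func key(namespace, name string) string { return namespace + "/" + name }

func (c *objectCache) RegisterPodReference(namespace, name string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.registered[key(namespace, name)] = true
}

func (c *objectCache) Get(namespace, name string) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	if !c.registered[key(namespace, name)] {
		return fmt.Errorf("object %q/%q not registered", namespace, name)
	}
	return nil // a real cache would return the cached object here
}

func main() {
	c := &objectCache{registered: map[string]bool{}}
	// Before registration: the exact failure mode seen in the log.
	fmt.Println(c.Get("openshift-network-console", "networking-console-plugin"))
	c.RegisterPodReference("openshift-network-console", "networking-console-plugin")
	fmt.Println(c.Get("openshift-network-console", "networking-console-plugin"))
}
```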
Jan 28 15:04:33 crc kubenswrapper[4981]: I0128 15:04:33.513680 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:04:33 crc kubenswrapper[4981]: I0128 15:04:33.513774 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:04:33 crc kubenswrapper[4981]: E0128 15:04:33.513872 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 15:04:33 crc kubenswrapper[4981]: E0128 15:04:33.513899 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 15:04:33 crc kubenswrapper[4981]: E0128 15:04:33.513912 4981 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 15:04:33 crc kubenswrapper[4981]: E0128 15:04:33.513944 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 28 15:04:33 crc kubenswrapper[4981]: E0128 15:04:33.513967 4981 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 28 15:04:33 crc kubenswrapper[4981]: E0128 15:04:33.513986 4981 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 15:04:33 crc kubenswrapper[4981]: E0128 15:04:33.513968 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 15:05:37.513950546 +0000 UTC m=+148.966108787 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 15:04:33 crc kubenswrapper[4981]: E0128 15:04:33.514050 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 15:05:37.514033388 +0000 UTC m=+148.966191669 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
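All of these failed operations land in the same place: "No retries permitted until ..." with durationBeforeRetry 1m4s. The volume manager retries failed mount/unmount operations with capped exponential backoff, and 1m4s is 500ms doubled seven times, consistent with repeated failures since the kubelet restarted (the m=+148 values put the process about 149 seconds into its run). A sketch of that schedule, assuming the 500ms initial delay and 2m2s cap used by upstream kubelet's exponential backoff; the exact values may differ by version:

```go
package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the previous delay up to a fixed cap, the schedule
// kubelet-style nestedpendingoperations apply between volume op retries.
// initial and maxDelay are assumptions drawn from upstream defaults.
func nextBackoff(prev time.Duration) time.Duration {
	const (
		initial  = 500 * time.Millisecond
		maxDelay = 2*time.Minute + 2*time.Second
	)
	if prev == 0 {
		return initial
	}
	next := prev * 2
	if next > maxDelay {
		next = maxDelay
	}
	return next
}

func main() {
	var d time.Duration
	for i := 1; i <= 9; i++ {
		d = nextBackoff(d)
		fmt.Printf("failure %d -> wait %v\n", i, d)
	}
	// failure 8 prints 1m4s, matching durationBeforeRetry in the log.
}
```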
Jan 28 15:04:34 crc kubenswrapper[4981]: I0128 15:04:34.312283 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 06:57:46.405938379 +0000 UTC
Jan 28 15:04:35 crc kubenswrapper[4981]: I0128 15:04:35.312498 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:30:29.081413255 +0000 UTC
Jan 28 15:04:35 crc kubenswrapper[4981]: I0128 15:04:35.317931 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:04:35 crc kubenswrapper[4981]: I0128 15:04:35.318019 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts"
Jan 28 15:04:35 crc kubenswrapper[4981]: I0128 15:04:35.318136 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:04:35 crc kubenswrapper[4981]: E0128 15:04:35.318235 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:04:35 crc kubenswrapper[4981]: I0128 15:04:35.318287 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:04:35 crc kubenswrapper[4981]: E0128 15:04:35.318490 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1"
Jan 28 15:04:35 crc kubenswrapper[4981]: E0128 15:04:35.318580 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:04:35 crc kubenswrapper[4981]: E0128 15:04:35.318878 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:04:35 crc kubenswrapper[4981]: I0128 15:04:35.319143 4981 scope.go:117] "RemoveContainer" containerID="ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a"
Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.105660 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.105705 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.105716 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.105732 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.105742 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:36Z","lastTransitionTime":"2026-01-28T15:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.160891 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovnkube-controller/2.log" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.170451 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerStarted","Data":"8963eef891d43000aede79bee50cee3b058c3195ab3b2ba45f083ef0a156b46d"} Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.171340 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.193334 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2120f6-91ca-46f8-a729-d91d715e85d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dc14b3cf4e388495b2e92f6b68c6f252a0896d6d92fc7bf6786b0ae938e8ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b4ad4c73d139cb2ab8966a0ebfe6edf1642de2069cbe4f080d209792127e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b4ad4c73d139cb2ab8966a0ebfe6edf1642de2069cbe4f080d209792127e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.208121 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.208223 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.208249 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.208277 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.208313 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:36Z","lastTransitionTime":"2026-01-28T15:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.220500 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.247538 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8963eef891d43000aede79bee50cee3b058c3195
ab3b2ba45f083ef0a156b46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:04:06Z\\\",\\\"message\\\":\\\"enshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:04:06.428926 6733 services_controller.go:360] Finished syncing service cluster-version-operator on namespace openshift-cluster-version for network=default : 1.823706ms\\\\nF0128 15:04:06.428930 6733 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:04:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.261815 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.278792 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.295071 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.311987 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.312036 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.312048 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.312066 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.312079 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:36Z","lastTransitionTime":"2026-01-28T15:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.312583 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.312653 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 22:47:42.711233118 +0000 UTC Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.323724 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.337486 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 
15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.350526 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ab363e-7b23-41a2-b81a-f304940ea4e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401004bf4f52b13621d039da3ad10fa2800e605b8e574b16a9200f0447169a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdec11bcd96c80a3dcffa4a5da6e5541079caace1911ad9d3387310299c033b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44b16b8020efea12a40e946909e999169518fb90219b88c84df8eb2696b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.365831 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.381655 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.398397 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1
e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.414936 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.415005 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.415022 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.415050 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.415068 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:36Z","lastTransitionTime":"2026-01-28T15:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.421908 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462
\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" 
(2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.433906 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.433962 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.433976 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.433998 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.434015 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:36Z","lastTransitionTime":"2026-01-28T15:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.445339 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"nam
e\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: E0128 15:04:36.458003 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.463619 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.463708 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.463735 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.463769 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.463795 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:36Z","lastTransitionTime":"2026-01-28T15:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.469757 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: E0128 15:04:36.482634 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.487113 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e787c9c633e01ce0e62e64cb5468c84dcf7452433437f827989301a9ef122368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:04:22Z\\\",\\\"message\\\":\\\"2026-01-28T15:03:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bafb8ab3-bd35-4172-a5e2-f1f5fce1ca97\\\\n2026-01-28T15:03:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bafb8ab3-bd35-4172-a5e2-f1f5fce1ca97 to /host/opt/cni/bin/\\\\n2026-01-28T15:03:37Z [verbose] multus-daemon started\\\\n2026-01-28T15:03:37Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:04:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.487288 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.487353 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.487363 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.487383 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.487396 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:36Z","lastTransitionTime":"2026-01-28T15:04:36Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:36 crc kubenswrapper[4981]: E0128 15:04:36.498242 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.502365 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.502422 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.502476 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.502503 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.502522 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:36Z","lastTransitionTime":"2026-01-28T15:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.503857 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: E0128 15:04:36.520903 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056
b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951
},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.525083 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.525133 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.525148 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.525172 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.525213 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:36Z","lastTransitionTime":"2026-01-28T15:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:36 crc kubenswrapper[4981]: E0128 15:04:36.547222 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list identical to the first patch attempt above, elided as a verbatim duplicate ... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:36Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:36 crc kubenswrapper[4981]: E0128 15:04:36.547349 4981 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.549962 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.550021 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.550037 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.550063 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.550079 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:36Z","lastTransitionTime":"2026-01-28T15:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.653689 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.653730 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.653741 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.653759 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.653770 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:36Z","lastTransitionTime":"2026-01-28T15:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.756653 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.756701 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.756714 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.756732 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.756745 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:36Z","lastTransitionTime":"2026-01-28T15:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.858458 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.858498 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.858507 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.858522 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.858532 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:36Z","lastTransitionTime":"2026-01-28T15:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.960807 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.960935 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.960955 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.960996 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:36 crc kubenswrapper[4981]: I0128 15:04:36.961027 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:36Z","lastTransitionTime":"2026-01-28T15:04:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.065026 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.065104 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.065127 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.065157 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.065181 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:37Z","lastTransitionTime":"2026-01-28T15:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.168727 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.168790 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.168805 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.168831 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.168849 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:37Z","lastTransitionTime":"2026-01-28T15:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.176682 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovnkube-controller/3.log" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.177990 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovnkube-controller/2.log" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.182249 4981 generic.go:334] "Generic (PLEG): container finished" podID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerID="8963eef891d43000aede79bee50cee3b058c3195ab3b2ba45f083ef0a156b46d" exitCode=1 Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.182301 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerDied","Data":"8963eef891d43000aede79bee50cee3b058c3195ab3b2ba45f083ef0a156b46d"} Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.182340 4981 scope.go:117] "RemoveContainer" containerID="ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.183628 4981 scope.go:117] "RemoveContainer" containerID="8963eef891d43000aede79bee50cee3b058c3195ab3b2ba45f083ef0a156b46d" Jan 28 15:04:37 crc kubenswrapper[4981]: E0128 15:04:37.183974 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.209751 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.230460 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.249830 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.266079 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.271541 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.271572 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.271583 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.271607 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.271620 4981 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:37Z","lastTransitionTime":"2026-01-28T15:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.298245 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-scri
pt-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8963eef891d43000aede79bee50cee3b058c3195ab3b2ba45f083ef0a156b46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccc72e46d7d8ee857f06271ee2aea7b81fea10927e8907e07d8d065a133ac73a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:04:06Z\\\",\\\"message\\\":\\\"enshift-marketplace/certified-operators\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.214\\\\\\\", Port:50051, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0128 15:04:06.428926 6733 services_controller.go:360] Finished syncing service cluster-version-operator on namespace openshift-cluster-version for network=default : 1.823706ms\\\\nF0128 15:04:06.428930 6733 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post 
\\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certif\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:04:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8963eef891d43000aede79bee50cee3b058c3195ab3b2ba45f083ef0a156b46d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:04:36.287387 7168 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:04:36.287802 7168 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:04:36.287852 7168 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:04:36.287899 7168 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:04:36.287923 7168 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:04:36.287951 7168 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:04:36.287970 7168 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:04:36.287994 7168 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:04:36.287953 7168 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 15:04:36.287981 7168 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:04:36.288060 7168 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:04:36.288092 7168 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:04:36.288138 7168 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:04:36.288170 7168 factory.go:656] Stopping watch factory\\\\nI0128 15:04:36.288269 7168 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
15:04:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.313123 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 19:58:17.610439732 +0000 UTC Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.314054 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.317739 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.317841 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.317979 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:37 crc kubenswrapper[4981]: E0128 15:04:37.318168 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:37 crc kubenswrapper[4981]: E0128 15:04:37.318291 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.318289 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:37 crc kubenswrapper[4981]: E0128 15:04:37.318539 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:37 crc kubenswrapper[4981]: E0128 15:04:37.318801 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.330240 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ab363e-7b23-41a2-b81a-f304940ea4e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401004bf4f52b13621d039da3ad10fa2800e605b8e574b16a9200f0447169a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdec11bcd96c80a3dcffa4a5da6e5541079caace1911ad9d3387310299c033b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44b16b8020efea12a40e946909e999169518fb90219b88c84df8eb2696b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.349647 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.365521 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.374337 4981 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.374404 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.374417 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.374442 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.374457 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:37Z","lastTransitionTime":"2026-01-28T15:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.389594 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.408664 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.426214 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03
:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.443061 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.460424 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e787c9c633e01ce0e62e64cb5468c84dcf7452433437f827989301a9ef122368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:04:22Z\\\",\\\"message\\\":\\\"2026-01-28T15:03:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bafb8ab3-bd35-4172-a5e2-f1f5fce1ca97\\\\n2026-01-28T15:03:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bafb8ab3-bd35-4172-a5e2-f1f5fce1ca97 to /host/opt/cni/bin/\\\\n2026-01-28T15:03:37Z [verbose] multus-daemon started\\\\n2026-01-28T15:03:37Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:04:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.471209 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.476331 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.476357 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.476366 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.476379 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.476388 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:37Z","lastTransitionTime":"2026-01-28T15:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.490029 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.506326 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.525032 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2120f6-91ca-46f8-a729-d91d715e85d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dc14b3cf4e388495b2e92f6b68c6f252a0896d6d92fc7bf6786b0ae938e8ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b4ad4c73d139cb2ab8966a0ebfe6edf1642de2069cbe4f080d209792127e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b4ad4c73d139cb2ab8966a0ebfe6edf1642de2069cbe4f080d209792127e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:37Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.579511 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.579558 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.579575 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.579597 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.579615 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:37Z","lastTransitionTime":"2026-01-28T15:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.682291 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.682335 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.682350 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.682370 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.682388 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:37Z","lastTransitionTime":"2026-01-28T15:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.785054 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.785099 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.785116 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.785136 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.785151 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:37Z","lastTransitionTime":"2026-01-28T15:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.887841 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.887908 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.887929 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.887960 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.887983 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:37Z","lastTransitionTime":"2026-01-28T15:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.991594 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.991649 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.991661 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.991676 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:37 crc kubenswrapper[4981]: I0128 15:04:37.991689 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:37Z","lastTransitionTime":"2026-01-28T15:04:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.094147 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.094202 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.094213 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.094229 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.094240 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:38Z","lastTransitionTime":"2026-01-28T15:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.188412 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovnkube-controller/3.log" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.192090 4981 scope.go:117] "RemoveContainer" containerID="8963eef891d43000aede79bee50cee3b058c3195ab3b2ba45f083ef0a156b46d" Jan 28 15:04:38 crc kubenswrapper[4981]: E0128 15:04:38.192360 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.196853 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.196892 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.196903 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.196920 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.196934 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:38Z","lastTransitionTime":"2026-01-28T15:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.214813 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.234420 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.255245 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.277100 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e787c9c633e01ce0e62e64cb5468c84dcf7452433437f827989301a9ef122368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:04:22Z\\\",\\\"message\\\":\\\"2026-01-28T15:03:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bafb8ab3-bd35-4172-a5e2-f1f5fce1ca97\\\\n2026-01-28T15:03:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bafb8ab3-bd35-4172-a5e2-f1f5fce1ca97 to /host/opt/cni/bin/\\\\n2026-01-28T15:03:37Z [verbose] multus-daemon started\\\\n2026-01-28T15:03:37Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:04:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.294751 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.300172 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.300238 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.300256 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.300279 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.300299 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:38Z","lastTransitionTime":"2026-01-28T15:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.313303 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 00:38:58.747358169 +0000 UTC Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.313547 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2120f6-91ca-46f8-a729-d91d715e85d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dc14b3cf4e388495b2e92f6b68c6f252a0896d6d92fc7bf6786b0ae938e8ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b4ad4c73d139cb2ab8966a0ebfe6edf1642de2069cbe4f080d209792127e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b4ad4c73d139cb2ab8966a0ebfe6edf1642de2069cbe4f080d209792127e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.334956 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.366585 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8963eef891d43000aede79bee50cee3b058c3195ab3b2ba45f083ef0a156b46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8963eef891d43000aede79bee50cee3b058c3195ab3b2ba45f083ef0a156b46d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:04:36.287387 7168 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:04:36.287802 7168 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:04:36.287852 7168 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:04:36.287899 7168 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:04:36.287923 7168 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:04:36.287951 7168 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:04:36.287970 7168 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:04:36.287994 7168 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:04:36.287953 7168 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 15:04:36.287981 7168 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:04:36.288060 7168 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:04:36.288092 7168 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:04:36.288138 7168 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:04:36.288170 7168 factory.go:656] Stopping watch factory\\\\nI0128 15:04:36.288269 7168 ovnkube.go:599] Stopped ovnkube\\\\nI0128 15:04:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:04:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.388427 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.403978 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.404050 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.404074 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.404106 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.404133 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:38Z","lastTransitionTime":"2026-01-28T15:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.413947 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.434021 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.448790 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.466310 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.486106 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:38Z is after 2025-08-24T17:21:41Z" Jan 28 
15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.503780 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ab363e-7b23-41a2-b81a-f304940ea4e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401004bf4f52b13621d039da3ad10fa2800e605b8e574b16a9200f0447169a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdec11bcd96c80a3dcffa4a5da6e5541079caace1911ad9d3387310299c033b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44b16b8020efea12a40e946909e999169518fb90219b88c84df8eb2696b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.506774 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.506830 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.506848 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.506870 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.506887 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:38Z","lastTransitionTime":"2026-01-28T15:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.523446 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.539051 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.555122 4981 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:38Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.610360 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.610445 4981 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.610471 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.610506 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.610525 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:38Z","lastTransitionTime":"2026-01-28T15:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.714177 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.714275 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.714299 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.714329 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.714350 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:38Z","lastTransitionTime":"2026-01-28T15:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.817328 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.817381 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.817393 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.817414 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.817428 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:38Z","lastTransitionTime":"2026-01-28T15:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.920482 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.920542 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.920565 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.920593 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:38 crc kubenswrapper[4981]: I0128 15:04:38.920617 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:38Z","lastTransitionTime":"2026-01-28T15:04:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.023133 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.023213 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.023227 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.023245 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.023258 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:39Z","lastTransitionTime":"2026-01-28T15:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.127162 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.127264 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.127292 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.127329 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.127354 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:39Z","lastTransitionTime":"2026-01-28T15:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.230759 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.230839 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.230880 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.230914 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.230936 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:39Z","lastTransitionTime":"2026-01-28T15:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.313470 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 09:23:13.571856644 +0000 UTC Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.317938 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.318032 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.318032 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.318231 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:39 crc kubenswrapper[4981]: E0128 15:04:39.318236 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:39 crc kubenswrapper[4981]: E0128 15:04:39.318392 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:39 crc kubenswrapper[4981]: E0128 15:04:39.318577 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:39 crc kubenswrapper[4981]: E0128 15:04:39.318946 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.333920 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.333961 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.333971 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.333989 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.334002 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:39Z","lastTransitionTime":"2026-01-28T15:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.336699 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"62ab363e-7b23-41a2-b81a-f304940ea4e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://401004bf4f52b13621d039da3ad10fa2800e605b8e574b16a9200f0447169a8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4fdec11bcd96c80a3dcffa4a5da6e5541079caace1911ad9d3387310299c033b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e44b16b8020efea12a40e946909e999169518fb90219b88c84df8eb2696b249\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed5787337ac4b079cb78dfaa42a6a1cb34b76fad5766195bc562f6d317ed66a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.358566 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:32Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5443da934188d29923ad4a6ac74972e6efa1d6be40d172090abc575b8bacc678\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.376435 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"67525d77-715e-4ec3-bdbb-6854657355c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14916a5adfea50ad11d7f186e97f5db2b0cfde45cd5acfd69389016f0828afd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gg6bb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-rcgbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.397726 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76561bd4-81e0-4978-ac44-fb6bf5f60c7d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c0c0d1a607b105958eef1fccd244456d6bcbcc8b6406f63de8f50f566a60cdc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d78620116de460f0a3705207814e069c7e36b0d9fb903e0fbf210ae441e1272\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://85f7abb832f9f2921eab1c0aa1964f363581bf7864ea73f3e2710065a1b77988\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae787077e8232e5d23db2f6a95ab315bcc4e398dadf489091f0dcbdd1b381736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://c05dc058f34f956f86d8a5797ab18a651c8b703e1b1e3b9c9509daf06b379f04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cc1eacdbe47a82f8d171420e17c507f45b8d0ed36b3bbb2711776a6514717fc4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4174aa9321eb24686e59eb0494e8bd846897d355c6f6f00370a34a37675202b7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-878rf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-4dgt8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.418798 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4ddd8a8-aa37-436c-baea-4d2a7017c609\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://887c8d93191c5631b9d11eec28e5d21c08e09898865624b9ac5d7fa901c5c8af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e3ed5f82a5895503c428ba0942938e124970fa92e2059ea8d3a85e5a8516b02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9qwm8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\
\"2026-01-28T15:03:48Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-snb84\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.436325 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.436358 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.436367 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.436381 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.436393 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:39Z","lastTransitionTime":"2026-01-28T15:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.443594 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f1b26ee-5569-4a25-851d-f1e23f13870a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"cure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 15:03:29.436076 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 15:03:29.436080 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 15:03:29.436083 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 15:03:29.436086 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 15:03:29.436168 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0128 15:03:29.440844 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769612603\\\\\\\\\\\\\\\" (2026-01-28 15:03:22 +0000 UTC to 2026-02-27 15:03:23 +0000 UTC (now=2026-01-28 15:03:29.440813678 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.440974 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769612609\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769612609\\\\\\\\\\\\\\\" (2026-01-28 14:03:28 +0000 UTC to 2027-01-28 14:03:28 +0000 UTC (now=2026-01-28 15:03:29.440952371 +0000 UTC))\\\\\\\"\\\\nI0128 15:03:29.441000 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0128 15:03:29.441024 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0128 15:03:29.441047 1 dynamic_serving_content.go:135] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-4250230660/tls.crt::/tmp/serving-cert-4250230660/tls.key\\\\\\\"\\\\nI0128 15:03:29.441202 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nF0128 15:03:29.441828 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.457530 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4b68a4ce25a52e2082b83606691b9787b930ef30e72f550c4eab470426f37e58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1c31a4d119349fe05cb34a5c3319d2953328a6b058de47f5b63758bc83b1e85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.476099 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.501557 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lwvh4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cd6b29e-682c-4aec-b039-70d6d75cbcbc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e787c9c633e01ce0e62e64cb5468c84dcf7452433437f827989301a9ef122368\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:04:22Z\\\",\\\"message\\\":\\\"2026-01-28T15:03:36+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_bafb8ab3-bd35-4172-a5e2-f1f5fce1ca97\\\\n2026-01-28T15:03:36+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_bafb8ab3-bd35-4172-a5e2-f1f5fce1ca97 to /host/opt/cni/bin/\\\\n2026-01-28T15:03:37Z [verbose] multus-daemon started\\\\n2026-01-28T15:03:37Z [verbose] Readiness Indicator file check\\\\n2026-01-28T15:04:22Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:04:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wkzd2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lwvh4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.517879 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-8rsts" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d5fda60c-a87b-4810-81df-4c7717d34ac1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:49Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzdzb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:49Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-8rsts\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.532593 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2120f6-91ca-46f8-a729-d91d715e85d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1dc14b3cf4e388495b2e92f6b68c6f252a0896d6d92fc7bf6786b0ae938e8ba2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://03b4ad4c73d139cb2ab8966a0ebfe6edf1642de2069cbe4f080d209792127e19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://03b4ad4c73d139cb2ab8966a0ebfe6edf1642de2069cbe4f080d209792127e19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.539871 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.539934 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.539954 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.539980 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.540000 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:39Z","lastTransitionTime":"2026-01-28T15:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.551973 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.569136 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83a3ae16-b145-450b-9313-31db84959fca\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eec5840a1b6cdfbcb1bf5a9df1b04f52a1f76603cf465250c03bc699b9ab581b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0010efa90fe7d096fc12bd714e9f0bfccd2f856d08e47e3160d2d68cd9e5e541\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1200058c6a60d76321113ee541f6cca460e2249f5fb66fec03efbaafa97d526d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.589990 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:30Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db20ae953465cec70747a601363194d4c296682038faf283b8d3020c6ff51eba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.608784 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:29Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.624891 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kfmjv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"072268dc-a2f0-47ef-86ae-1e7504b832b5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a08dadaa0ff04e4b6bf903a45d9c43b58ef2ecbab2c124d2465b2a0983c502df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhbhq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kfmjv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.640800 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-dp2b6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff8ae630-1ed6-4dd3-97b6-f93e12901e6a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27123dd4bc612d317ce50722a7d2d7f636e8d242b8f46602fc8fa03d037f238b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tdtjc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:34Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-dp2b6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.643259 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.643429 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.643445 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.643469 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.643484 4981 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:39Z","lastTransitionTime":"2026-01-28T15:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.668850 4981 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbdbd481-8604-433f-823e-d77a8b8517a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-scri
pt-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8963eef891d43000aede79bee50cee3b058c3195ab3b2ba45f083ef0a156b46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8963eef891d43000aede79bee50cee3b058c3195ab3b2ba45f083ef0a156b46d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T15:04:36Z\\\",\\\"message\\\":\\\"sip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 15:04:36.287387 7168 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 15:04:36.287802 7168 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 15:04:36.287852 7168 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 15:04:36.287899 7168 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 15:04:36.287923 7168 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 15:04:36.287951 7168 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 15:04:36.287970 7168 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 15:04:36.287994 7168 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 15:04:36.287953 7168 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 15:04:36.287981 7168 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 15:04:36.288060 7168 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 15:04:36.288092 7168 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 15:04:36.288138 7168 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 15:04:36.288170 7168 factory.go:656] Stopping watch factory\\\\nI0128 15:04:36.288269 7168 ovnkube.go:599] 
Stopped ovnkube\\\\nI0128 15:04:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T15:04:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T15:03:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T15:03:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T15:03:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fnr8\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T15:03:35Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-2ss7x\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:39Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.748021 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.748101 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.748129 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.748161 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:39 crc kubenswrapper[4981]: I0128 15:04:39.748218 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:39Z","lastTransitionTime":"2026-01-28T15:04:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:04:40 crc kubenswrapper[4981]: I0128 15:04:40.313789 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:35:49.370651832 +0000 UTC
Jan 28 15:04:41 crc kubenswrapper[4981]: I0128 15:04:41.095203 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:41Z","lastTransitionTime":"2026-01-28T15:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:04:41 crc kubenswrapper[4981]: I0128 15:04:41.314642 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 04:24:00.959955521 +0000 UTC
Jan 28 15:04:41 crc kubenswrapper[4981]: I0128 15:04:41.318148 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:04:41 crc kubenswrapper[4981]: I0128 15:04:41.318175 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts"
Jan 28 15:04:41 crc kubenswrapper[4981]: I0128 15:04:41.318255 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:04:41 crc kubenswrapper[4981]: I0128 15:04:41.318260 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:04:41 crc kubenswrapper[4981]: E0128 15:04:41.318299 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:04:41 crc kubenswrapper[4981]: E0128 15:04:41.318386 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:04:41 crc kubenswrapper[4981]: E0128 15:04:41.318541 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1"
Jan 28 15:04:41 crc kubenswrapper[4981]: E0128 15:04:41.318720 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:04:41 crc kubenswrapper[4981]: I0128 15:04:41.404580 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:41Z","lastTransitionTime":"2026-01-28T15:04:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:04:42 crc kubenswrapper[4981]: I0128 15:04:42.232915 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:42Z","lastTransitionTime":"2026-01-28T15:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:04:42 crc kubenswrapper[4981]: I0128 15:04:42.315564 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 18:24:05.09768923 +0000 UTC
Jan 28 15:04:42 crc kubenswrapper[4981]: I0128 15:04:42.438596 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:42Z","lastTransitionTime":"2026-01-28T15:04:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:04:43 crc kubenswrapper[4981]: I0128 15:04:43.065590 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:43Z","lastTransitionTime":"2026-01-28T15:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:04:43 crc kubenswrapper[4981]: I0128 15:04:43.316311 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 21:18:12.765015178 +0000 UTC
Jan 28 15:04:43 crc kubenswrapper[4981]: I0128 15:04:43.318710 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts"
Jan 28 15:04:43 crc kubenswrapper[4981]: I0128 15:04:43.318845 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:04:43 crc kubenswrapper[4981]: I0128 15:04:43.318845 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:04:43 crc kubenswrapper[4981]: E0128 15:04:43.318941 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1"
Jan 28 15:04:43 crc kubenswrapper[4981]: I0128 15:04:43.318971 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:04:43 crc kubenswrapper[4981]: E0128 15:04:43.319142 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:04:43 crc kubenswrapper[4981]: E0128 15:04:43.319283 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:04:43 crc kubenswrapper[4981]: E0128 15:04:43.319535 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:04:43 crc kubenswrapper[4981]: I0128 15:04:43.376002 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:43Z","lastTransitionTime":"2026-01-28T15:04:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.302664 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:44Z","lastTransitionTime":"2026-01-28T15:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.316902 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:46:18.136334504 +0000 UTC
Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.332796 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.509265 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:44Z","lastTransitionTime":"2026-01-28T15:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.612573 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.612620 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.612631 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.612649 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.612661 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:44Z","lastTransitionTime":"2026-01-28T15:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.718265 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.718339 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.718361 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.718391 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.718414 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:44Z","lastTransitionTime":"2026-01-28T15:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.821256 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.821302 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.821314 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.821330 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.821341 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:44Z","lastTransitionTime":"2026-01-28T15:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.923665 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.923696 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.923704 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.923718 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:44 crc kubenswrapper[4981]: I0128 15:04:44.923728 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:44Z","lastTransitionTime":"2026-01-28T15:04:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.026398 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.026454 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.026473 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.026495 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.026513 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:45Z","lastTransitionTime":"2026-01-28T15:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.130029 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.130093 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.130110 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.130135 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.130151 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:45Z","lastTransitionTime":"2026-01-28T15:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.232678 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.232722 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.232731 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.232744 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.232754 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:45Z","lastTransitionTime":"2026-01-28T15:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.317662 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:06:27.64411893 +0000 UTC Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.317956 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.318021 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:45 crc kubenswrapper[4981]: E0128 15:04:45.318227 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.318238 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:45 crc kubenswrapper[4981]: E0128 15:04:45.318338 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:45 crc kubenswrapper[4981]: E0128 15:04:45.318480 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.318794 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:45 crc kubenswrapper[4981]: E0128 15:04:45.318939 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.336097 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.336162 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.336222 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.336255 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.336278 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:45Z","lastTransitionTime":"2026-01-28T15:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.439884 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.440290 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.440309 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.440336 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.440359 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:45Z","lastTransitionTime":"2026-01-28T15:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.545335 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.545413 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.545437 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.545474 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.545505 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:45Z","lastTransitionTime":"2026-01-28T15:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.649803 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.649880 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.649901 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.649926 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.649947 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:45Z","lastTransitionTime":"2026-01-28T15:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.753385 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.753446 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.753458 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.753477 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.753491 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:45Z","lastTransitionTime":"2026-01-28T15:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.855982 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.856032 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.856047 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.856067 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.856080 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:45Z","lastTransitionTime":"2026-01-28T15:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.959499 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.959572 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.959596 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.959630 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:45 crc kubenswrapper[4981]: I0128 15:04:45.959650 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:45Z","lastTransitionTime":"2026-01-28T15:04:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.062791 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.062864 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.062882 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.062908 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.062932 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:46Z","lastTransitionTime":"2026-01-28T15:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.166394 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.166447 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.166461 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.166480 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.166491 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:46Z","lastTransitionTime":"2026-01-28T15:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.269339 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.269385 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.269397 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.269412 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.269423 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:46Z","lastTransitionTime":"2026-01-28T15:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.317977 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 23:41:14.330374113 +0000 UTC Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.371854 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.371933 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.371946 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.371988 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.372004 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:46Z","lastTransitionTime":"2026-01-28T15:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.475534 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.475631 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.475658 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.475692 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.475718 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:46Z","lastTransitionTime":"2026-01-28T15:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.579244 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.579340 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.579355 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.579377 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.579393 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:46Z","lastTransitionTime":"2026-01-28T15:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.682618 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.682677 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.682697 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.682724 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.682740 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:46Z","lastTransitionTime":"2026-01-28T15:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.766816 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.766872 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.766887 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.766907 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.766923 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:46Z","lastTransitionTime":"2026-01-28T15:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:46 crc kubenswrapper[4981]: E0128 15:04:46.785449 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.790857 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.790900 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.790910 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.790926 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.790940 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:46Z","lastTransitionTime":"2026-01-28T15:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:46 crc kubenswrapper[4981]: E0128 15:04:46.812821 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
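
The status patch itself is rejected before it ever reaches the Node object: the network-node-identity webhook on 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24, while the node clock reads 2026-01-28. A quick way to confirm from the node, assuming openssl is installed (the address and port are taken from the error above):

    # Print the validity window of whatever is serving the webhook port
    openssl s_client -connect 127.0.0.1:9743 </dev/null 2>/dev/null \
      | openssl x509 -noout -dates

If this is the certificate the kubelet is rejecting, the notAfter line should match the 2025-08-24 17:21:41 expiry quoted in the error.
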
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.818468 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.818487 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.818520 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.818538 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:46Z","lastTransitionTime":"2026-01-28T15:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:46 crc kubenswrapper[4981]: E0128 15:04:46.834637 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.840397 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.840460 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.840479 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.840503 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.840518 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:46Z","lastTransitionTime":"2026-01-28T15:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:46 crc kubenswrapper[4981]: E0128 15:04:46.861682 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.867466 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.867537 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.867553 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.867581 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.867603 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:46Z","lastTransitionTime":"2026-01-28T15:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:46 crc kubenswrapper[4981]: E0128 15:04:46.885718 4981 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404544Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865344Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T15:04:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e730fd4b-ce6e-4137-9fbe-a43501684872\\\",\\\"systemUUID\\\":\\\"bdcb13d9-b39a-47f8-8de2-451381277fbd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T15:04:46Z is after 2025-08-24T17:21:41Z" Jan 28 15:04:46 crc kubenswrapper[4981]: E0128 15:04:46.885958 4981 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.889330 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.889389 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.889409 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.889433 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.889449 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:46Z","lastTransitionTime":"2026-01-28T15:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.993481 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.993565 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.993586 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.993618 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:46 crc kubenswrapper[4981]: I0128 15:04:46.993637 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:46Z","lastTransitionTime":"2026-01-28T15:04:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.097154 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.097259 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.097271 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.097294 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.097307 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:47Z","lastTransitionTime":"2026-01-28T15:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.200879 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.200947 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.200960 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.200984 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.200997 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:47Z","lastTransitionTime":"2026-01-28T15:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.304025 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.304089 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.304107 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.304134 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.304152 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:47Z","lastTransitionTime":"2026-01-28T15:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.318359 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 19:45:26.4306983 +0000 UTC Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.318585 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.318622 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.318633 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.318730 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:47 crc kubenswrapper[4981]: E0128 15:04:47.318886 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:47 crc kubenswrapper[4981]: E0128 15:04:47.319017 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:47 crc kubenswrapper[4981]: E0128 15:04:47.319128 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:47 crc kubenswrapper[4981]: E0128 15:04:47.319219 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.408175 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.408301 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.408324 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.408356 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.408382 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:47Z","lastTransitionTime":"2026-01-28T15:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.511988 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.512052 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.512067 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.512085 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.512099 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:47Z","lastTransitionTime":"2026-01-28T15:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.615132 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.615244 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.615270 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.615300 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.615323 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:47Z","lastTransitionTime":"2026-01-28T15:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.718998 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.719067 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.719080 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.719101 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.719115 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:47Z","lastTransitionTime":"2026-01-28T15:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.822817 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.822867 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.822883 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.822906 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.822923 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:47Z","lastTransitionTime":"2026-01-28T15:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.926300 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.926364 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.926381 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.926407 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:47 crc kubenswrapper[4981]: I0128 15:04:47.926428 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:47Z","lastTransitionTime":"2026-01-28T15:04:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.029108 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.029231 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.029253 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.029289 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.029312 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:48Z","lastTransitionTime":"2026-01-28T15:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.132556 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.132641 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.132661 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.132691 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.132711 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:48Z","lastTransitionTime":"2026-01-28T15:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.235996 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.236056 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.236075 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.236101 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.236136 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:48Z","lastTransitionTime":"2026-01-28T15:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.318525 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:58:01.513480197 +0000 UTC
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.339332 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.339405 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.339427 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.339456 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.339481 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:48Z","lastTransitionTime":"2026-01-28T15:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.442326 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.442395 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.442412 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.442437 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.442454 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:48Z","lastTransitionTime":"2026-01-28T15:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.546274 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.546333 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.546346 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.546365 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.546378 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:48Z","lastTransitionTime":"2026-01-28T15:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.650332 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.650383 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.650394 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.650413 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.650424 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:48Z","lastTransitionTime":"2026-01-28T15:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.753467 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.753531 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.753549 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.753574 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.753591 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:48Z","lastTransitionTime":"2026-01-28T15:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.857094 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.857147 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.857159 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.857179 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.857236 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:48Z","lastTransitionTime":"2026-01-28T15:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.960411 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.960485 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.960503 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.960532 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:48 crc kubenswrapper[4981]: I0128 15:04:48.960551 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:48Z","lastTransitionTime":"2026-01-28T15:04:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.063747 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.063787 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.063796 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.063809 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.063820 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:49Z","lastTransitionTime":"2026-01-28T15:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.167093 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.167172 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.167272 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.167306 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.167380 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:49Z","lastTransitionTime":"2026-01-28T15:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.269664 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.269729 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.269748 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.269772 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.269789 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:49Z","lastTransitionTime":"2026-01-28T15:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.318156 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:04:49 crc kubenswrapper[4981]: E0128 15:04:49.318291 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.318405 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.318498 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.318635 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:04:49 crc kubenswrapper[4981]: E0128 15:04:49.318639 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.318723 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 18:19:57.801948781 +0000 UTC
Jan 28 15:04:49 crc kubenswrapper[4981]: E0128 15:04:49.319422 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.319851 4981 scope.go:117] "RemoveContainer" containerID="8963eef891d43000aede79bee50cee3b058c3195ab3b2ba45f083ef0a156b46d"
Jan 28 15:04:49 crc kubenswrapper[4981]: E0128 15:04:49.320082 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8"
Jan 28 15:04:49 crc kubenswrapper[4981]: E0128 15:04:49.320143 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.372698 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.372749 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.372764 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.372787 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.372805 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:49Z","lastTransitionTime":"2026-01-28T15:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.484272 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.484366 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.484384 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.484424 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.484448 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:49Z","lastTransitionTime":"2026-01-28T15:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.500292 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-4dgt8" podStartSLOduration=75.50026135 podStartE2EDuration="1m15.50026135s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:04:49.500032345 +0000 UTC m=+100.952190616" watchObservedRunningTime="2026-01-28 15:04:49.50026135 +0000 UTC m=+100.952419611"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.557070 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-snb84" podStartSLOduration=74.557041616 podStartE2EDuration="1m14.557041616s" podCreationTimestamp="2026-01-28 15:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:04:49.520905451 +0000 UTC m=+100.973063692" watchObservedRunningTime="2026-01-28 15:04:49.557041616 +0000 UTC m=+101.009199887"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.557277 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=5.557270152 podStartE2EDuration="5.557270152s" podCreationTimestamp="2026-01-28 15:04:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:04:49.555421566 +0000 UTC m=+101.007579827" watchObservedRunningTime="2026-01-28 15:04:49.557270152 +0000 UTC m=+101.009428433"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.588240 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.588284 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.588297 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.588315 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.588327 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:49Z","lastTransitionTime":"2026-01-28T15:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.604763 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=45.604727087 podStartE2EDuration="45.604727087s" podCreationTimestamp="2026-01-28 15:04:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:04:49.578095587 +0000 UTC m=+101.030253869" watchObservedRunningTime="2026-01-28 15:04:49.604727087 +0000 UTC m=+101.056885368"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.625552 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podStartSLOduration=75.625522872 podStartE2EDuration="1m15.625522872s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:04:49.625355718 +0000 UTC m=+101.077514009" watchObservedRunningTime="2026-01-28 15:04:49.625522872 +0000 UTC m=+101.077681153"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.664012 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=79.663984054 podStartE2EDuration="1m19.663984054s" podCreationTimestamp="2026-01-28 15:03:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:04:49.663335078 +0000 UTC m=+101.115493359" watchObservedRunningTime="2026-01-28 15:04:49.663984054 +0000 UTC m=+101.116142335"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.697728 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.697788 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.697801 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.697833 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.697846 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:49Z","lastTransitionTime":"2026-01-28T15:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.747707 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lwvh4" podStartSLOduration=75.747677327 podStartE2EDuration="1m15.747677327s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:04:49.747068461 +0000 UTC m=+101.199226722" watchObservedRunningTime="2026-01-28 15:04:49.747677327 +0000 UTC m=+101.199835568"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.778177 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=24.778145721 podStartE2EDuration="24.778145721s" podCreationTimestamp="2026-01-28 15:04:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:04:49.7619424 +0000 UTC m=+101.214100661" watchObservedRunningTime="2026-01-28 15:04:49.778145721 +0000 UTC m=+101.230303962"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.792570 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-dp2b6" podStartSLOduration=75.792536487 podStartE2EDuration="1m15.792536487s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:04:49.792369533 +0000 UTC m=+101.244527804" watchObservedRunningTime="2026-01-28 15:04:49.792536487 +0000 UTC m=+101.244694738"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.801796 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.801846 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.801860 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.801880 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.801893 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:49Z","lastTransitionTime":"2026-01-28T15:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.842227 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=74.842172526 podStartE2EDuration="1m14.842172526s" podCreationTimestamp="2026-01-28 15:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:04:49.841717315 +0000 UTC m=+101.293875556" watchObservedRunningTime="2026-01-28 15:04:49.842172526 +0000 UTC m=+101.294330777"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.904503 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.904573 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.904593 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.904625 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:49 crc kubenswrapper[4981]: I0128 15:04:49.904644 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:49Z","lastTransitionTime":"2026-01-28T15:04:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.007382 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.007443 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.007454 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.007469 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.007479 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:50Z","lastTransitionTime":"2026-01-28T15:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.110365 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.110425 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.110442 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.110466 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.110512 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:50Z","lastTransitionTime":"2026-01-28T15:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.213312 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.213380 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.213400 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.213425 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.213443 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:50Z","lastTransitionTime":"2026-01-28T15:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.316063 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.316103 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.316112 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.316128 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.316138 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:50Z","lastTransitionTime":"2026-01-28T15:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.319252 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 20:09:40.677745758 +0000 UTC
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.418820 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.418898 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.418923 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.418954 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.418977 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:50Z","lastTransitionTime":"2026-01-28T15:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.522094 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.522168 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.522220 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.522247 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.522265 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:50Z","lastTransitionTime":"2026-01-28T15:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.624772 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.624829 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.624846 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.624869 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.624886 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:50Z","lastTransitionTime":"2026-01-28T15:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.728360 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.728438 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.728460 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.728493 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.728514 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:50Z","lastTransitionTime":"2026-01-28T15:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.831635 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.831714 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.831735 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.831766 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.831787 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:50Z","lastTransitionTime":"2026-01-28T15:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.934843 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.934901 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.934917 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.934941 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:50 crc kubenswrapper[4981]: I0128 15:04:50.934959 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:50Z","lastTransitionTime":"2026-01-28T15:04:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.039124 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.039222 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.039241 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.039266 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.039283 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:51Z","lastTransitionTime":"2026-01-28T15:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.142749 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.142799 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.142815 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.142837 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.142859 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:51Z","lastTransitionTime":"2026-01-28T15:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.244516 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.244547 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.244556 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.244569 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.244578 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:51Z","lastTransitionTime":"2026-01-28T15:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.318252 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.318381 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.318381 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts"
Jan 28 15:04:51 crc kubenswrapper[4981]: E0128 15:04:51.318557 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 15:04:51 crc kubenswrapper[4981]: E0128 15:04:51.318691 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.318741 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 15:04:51 crc kubenswrapper[4981]: E0128 15:04:51.318952 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1"
Jan 28 15:04:51 crc kubenswrapper[4981]: E0128 15:04:51.319059 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.319434 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 03:00:50.107910426 +0000 UTC
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.346745 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.346840 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.346898 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.346925 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.346943 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:51Z","lastTransitionTime":"2026-01-28T15:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.450098 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.450173 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.450234 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.450262 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.450282 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:51Z","lastTransitionTime":"2026-01-28T15:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.554740 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.554803 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.554820 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.554845 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.554861 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:51Z","lastTransitionTime":"2026-01-28T15:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.658076 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.658216 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.658251 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.658295 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.658318 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:51Z","lastTransitionTime":"2026-01-28T15:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.761381 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.761429 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.761440 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.761457 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.761471 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:51Z","lastTransitionTime":"2026-01-28T15:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.864995 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.865057 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.865075 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.865099 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.865116 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:51Z","lastTransitionTime":"2026-01-28T15:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.968138 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.968244 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.968263 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.968288 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:51 crc kubenswrapper[4981]: I0128 15:04:51.968307 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:51Z","lastTransitionTime":"2026-01-28T15:04:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.070925 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.071020 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.071070 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.071100 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.071122 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:52Z","lastTransitionTime":"2026-01-28T15:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.182762 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.182853 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.182869 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.182898 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.182930 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:52Z","lastTransitionTime":"2026-01-28T15:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.285522 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.285597 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.285615 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.285638 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.285655 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:52Z","lastTransitionTime":"2026-01-28T15:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.320614 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:03:33.821892808 +0000 UTC
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.388666 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.388771 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.388788 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.388816 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.388836 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:52Z","lastTransitionTime":"2026-01-28T15:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.491651 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.491715 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.491731 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.491754 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.491770 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:52Z","lastTransitionTime":"2026-01-28T15:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.599427 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.599504 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.599524 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.599568 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.599587 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:52Z","lastTransitionTime":"2026-01-28T15:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.702485 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.702530 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.702541 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.702558 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.702570 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:52Z","lastTransitionTime":"2026-01-28T15:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.806070 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.806127 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.806144 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.806168 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.806225 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:52Z","lastTransitionTime":"2026-01-28T15:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.908688 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.908748 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.908766 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.908789 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:52 crc kubenswrapper[4981]: I0128 15:04:52.908805 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:52Z","lastTransitionTime":"2026-01-28T15:04:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.015496 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.015561 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.015579 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.015602 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.015620 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:53Z","lastTransitionTime":"2026-01-28T15:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.118617 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.118680 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.118703 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.118732 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.118753 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:53Z","lastTransitionTime":"2026-01-28T15:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.220940 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.220966 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.220975 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.220990 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.221002 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:53Z","lastTransitionTime":"2026-01-28T15:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.318253 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:53 crc kubenswrapper[4981]: E0128 15:04:53.318496 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.318665 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.318677 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.318290 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:53 crc kubenswrapper[4981]: E0128 15:04:53.318985 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:53 crc kubenswrapper[4981]: E0128 15:04:53.319068 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:53 crc kubenswrapper[4981]: E0128 15:04:53.319138 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.321529 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:00:25.87027918 +0000 UTC Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.323591 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.324051 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.324125 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.324587 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.324663 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:53Z","lastTransitionTime":"2026-01-28T15:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.428727 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.428787 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.428801 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.428824 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.428840 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:53Z","lastTransitionTime":"2026-01-28T15:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.532301 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.532363 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.532381 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.532407 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.532428 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:53Z","lastTransitionTime":"2026-01-28T15:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.636393 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.636490 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.636548 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.636585 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.636610 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:53Z","lastTransitionTime":"2026-01-28T15:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.740652 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.740725 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.740745 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.740775 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.740795 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:53Z","lastTransitionTime":"2026-01-28T15:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.748038 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs\") pod \"network-metrics-daemon-8rsts\" (UID: \"d5fda60c-a87b-4810-81df-4c7717d34ac1\") " pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:53 crc kubenswrapper[4981]: E0128 15:04:53.748245 4981 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:04:53 crc kubenswrapper[4981]: E0128 15:04:53.748405 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs podName:d5fda60c-a87b-4810-81df-4c7717d34ac1 nodeName:}" failed. No retries permitted until 2026-01-28 15:05:57.748344787 +0000 UTC m=+169.200503108 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs") pod "network-metrics-daemon-8rsts" (UID: "d5fda60c-a87b-4810-81df-4c7717d34ac1") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.842847 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.842905 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.842922 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.842942 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.842957 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:53Z","lastTransitionTime":"2026-01-28T15:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
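Has your network provider started?"}

The MountVolume.SetUp failure above is not retried immediately: the operation executor schedules the next attempt 64 seconds out ("durationBeforeRetry 1m4s", at m=+169.2). That spacing is consistent with a doubling backoff that began near 500ms several failures earlier. A toy model of such a capped doubling policy follows; the base, factor, and cap are illustrative assumptions, not values read from kubelet source:

    from datetime import timedelta

    def backoff_schedule(base=timedelta(milliseconds=500), factor=2,
                         cap=timedelta(minutes=2, seconds=2), attempts=10):
        """Yield the wait before each retry under a capped doubling policy."""
        wait = base
        for _ in range(attempts):
            yield wait
            wait = min(wait * factor, cap)

    for i, wait in enumerate(backoff_schedule(), start=1):
        print(f"failure {i:2d}: next retry in {wait}")
    # Under these assumed constants the 8th consecutive failure waits 64s,
    # matching the "durationBeforeRetry 1m4s" in the entry above.
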
Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.947013 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.947090 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.947104 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.947128 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:53 crc kubenswrapper[4981]: I0128 15:04:53.947145 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:53Z","lastTransitionTime":"2026-01-28T15:04:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.051565 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.051660 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.051679 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.051738 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.051755 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:54Z","lastTransitionTime":"2026-01-28T15:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.155288 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.155373 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.155391 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.155417 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.155436 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:54Z","lastTransitionTime":"2026-01-28T15:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.258378 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.258440 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.258459 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.258484 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.258501 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:54Z","lastTransitionTime":"2026-01-28T15:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.322581 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 08:09:02.689807965 +0000 UTC Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.361375 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.361446 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.361468 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.361497 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.361519 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:54Z","lastTransitionTime":"2026-01-28T15:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.464587 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.464644 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.464661 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.464691 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.464708 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:54Z","lastTransitionTime":"2026-01-28T15:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.568300 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.568382 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.568416 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.568447 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.568468 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:54Z","lastTransitionTime":"2026-01-28T15:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.671326 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.671454 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.671471 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.671491 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.671506 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:54Z","lastTransitionTime":"2026-01-28T15:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.773800 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.773857 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.773875 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.773899 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.773915 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:54Z","lastTransitionTime":"2026-01-28T15:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.876433 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.876477 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.876491 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.876511 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.876525 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:54Z","lastTransitionTime":"2026-01-28T15:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.979255 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.979318 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.979336 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.979360 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:54 crc kubenswrapper[4981]: I0128 15:04:54.979377 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:54Z","lastTransitionTime":"2026-01-28T15:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
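Has your network provider started?"}

While the Ready condition stays False, the node status loop above fires roughly every 100ms (15:04:54.773, .876, .979, ...), emitting the same five-event burst on each pass. A small sketch that measures this cadence from the klog timestamps; the kubelet.log filename and the one-entry-per-line layout are assumptions:

    import re

    # klog prefix, e.g. "I0128 15:04:54.773800" -> capture H, M, S, microseconds.
    TS = re.compile(r"I\d{4} (\d{2}):(\d{2}):(\d{2})\.(\d{6})")

    def to_seconds(m):
        h, mnt, s, us = (int(g) for g in m.groups())
        return h * 3600 + mnt * 60 + s + us / 1e6

    stamps = []
    with open("kubelet.log") as f:  # hypothetical file, one journal entry per line
        for line in f:
            if '"Node became not ready"' in line:
                m = TS.search(line)
                if m:
                    stamps.append(to_seconds(m))

    gaps = [b - a for a, b in zip(stamps, stamps[1:])]
    if gaps:
        print(f"median gap: {sorted(gaps)[len(gaps) // 2]:.3f}s over {len(gaps)} intervals")
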
Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.081787 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.081871 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.081889 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.081917 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.081935 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:55Z","lastTransitionTime":"2026-01-28T15:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.184103 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.184241 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.184268 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.184301 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.184323 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:55Z","lastTransitionTime":"2026-01-28T15:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.286886 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.286963 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.286986 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.287014 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.287035 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:55Z","lastTransitionTime":"2026-01-28T15:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.318602 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:55 crc kubenswrapper[4981]: E0128 15:04:55.318734 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.318602 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.318776 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:55 crc kubenswrapper[4981]: E0128 15:04:55.318830 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.318602 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:55 crc kubenswrapper[4981]: E0128 15:04:55.318925 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:55 crc kubenswrapper[4981]: E0128 15:04:55.319121 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.323090 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 03:37:49.874743311 +0000 UTC Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.391348 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.391432 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.391460 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.391495 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.391529 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:55Z","lastTransitionTime":"2026-01-28T15:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.495531 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.495607 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.495621 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.495666 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.495683 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:55Z","lastTransitionTime":"2026-01-28T15:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.599103 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.599179 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.599253 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.599287 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.599315 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:55Z","lastTransitionTime":"2026-01-28T15:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.703869 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.703950 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.703964 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.703988 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.704006 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:55Z","lastTransitionTime":"2026-01-28T15:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.807911 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.807972 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.807986 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.808009 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.808027 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:55Z","lastTransitionTime":"2026-01-28T15:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.910694 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.910739 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.910750 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.910765 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:55 crc kubenswrapper[4981]: I0128 15:04:55.910778 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:55Z","lastTransitionTime":"2026-01-28T15:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.013518 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.013602 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.013624 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.013651 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.013668 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:56Z","lastTransitionTime":"2026-01-28T15:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.116747 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.116807 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.116826 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.116848 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.116865 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:56Z","lastTransitionTime":"2026-01-28T15:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.220285 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.220355 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.220378 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.220408 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.220429 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:56Z","lastTransitionTime":"2026-01-28T15:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.323356 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 06:37:28.216252139 +0000 UTC Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.323776 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.323826 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.323846 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.323868 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.323884 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:56Z","lastTransitionTime":"2026-01-28T15:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.426889 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.426948 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.426965 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.426988 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.427004 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:56Z","lastTransitionTime":"2026-01-28T15:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.529824 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.529856 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.529866 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.529884 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.529895 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:56Z","lastTransitionTime":"2026-01-28T15:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.632502 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.632570 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.632590 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.632617 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.632634 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:56Z","lastTransitionTime":"2026-01-28T15:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.735307 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.735383 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.735399 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.735419 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.735431 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:56Z","lastTransitionTime":"2026-01-28T15:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.838089 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.838153 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.838172 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.838227 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.838245 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:56Z","lastTransitionTime":"2026-01-28T15:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.941325 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.941388 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.941412 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.941441 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:56 crc kubenswrapper[4981]: I0128 15:04:56.941462 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:56Z","lastTransitionTime":"2026-01-28T15:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.044142 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.044282 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.044310 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.044337 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.044358 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:57Z","lastTransitionTime":"2026-01-28T15:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.147305 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.147345 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.147353 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.147367 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.147377 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:57Z","lastTransitionTime":"2026-01-28T15:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.202699 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.202740 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.202752 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.202768 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.202781 4981 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T15:04:57Z","lastTransitionTime":"2026-01-28T15:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.271752 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-kfmjv" podStartSLOduration=83.271727239 podStartE2EDuration="1m23.271727239s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:04:49.894135203 +0000 UTC m=+101.346293454" watchObservedRunningTime="2026-01-28 15:04:57.271727239 +0000 UTC m=+108.723885480" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.272519 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw"] Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.273073 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.276803 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.276929 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.276809 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.277991 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.318268 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.318299 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.318361 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:57 crc kubenswrapper[4981]: E0128 15:04:57.318497 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.318569 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:57 crc kubenswrapper[4981]: E0128 15:04:57.318679 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:04:57 crc kubenswrapper[4981]: E0128 15:04:57.318777 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:57 crc kubenswrapper[4981]: E0128 15:04:57.318972 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.323515 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 02:26:13.865027201 +0000 UTC Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.323578 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.334279 4981 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.389703 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b5d72c8e-6a92-4e6d-9629-f3c1609ab65e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-lbqrw\" (UID: \"b5d72c8e-6a92-4e6d-9629-f3c1609ab65e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.389812 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5d72c8e-6a92-4e6d-9629-f3c1609ab65e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-lbqrw\" (UID: \"b5d72c8e-6a92-4e6d-9629-f3c1609ab65e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.390069 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b5d72c8e-6a92-4e6d-9629-f3c1609ab65e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-lbqrw\" (UID: \"b5d72c8e-6a92-4e6d-9629-f3c1609ab65e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.390171 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5d72c8e-6a92-4e6d-9629-f3c1609ab65e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-lbqrw\" (UID: \"b5d72c8e-6a92-4e6d-9629-f3c1609ab65e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.390269 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b5d72c8e-6a92-4e6d-9629-f3c1609ab65e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-lbqrw\" (UID: \"b5d72c8e-6a92-4e6d-9629-f3c1609ab65e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.491418 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5d72c8e-6a92-4e6d-9629-f3c1609ab65e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-lbqrw\" (UID: \"b5d72c8e-6a92-4e6d-9629-f3c1609ab65e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.491467 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b5d72c8e-6a92-4e6d-9629-f3c1609ab65e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-lbqrw\" (UID: \"b5d72c8e-6a92-4e6d-9629-f3c1609ab65e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.491498 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b5d72c8e-6a92-4e6d-9629-f3c1609ab65e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-lbqrw\" (UID: \"b5d72c8e-6a92-4e6d-9629-f3c1609ab65e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.491528 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5d72c8e-6a92-4e6d-9629-f3c1609ab65e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-lbqrw\" (UID: \"b5d72c8e-6a92-4e6d-9629-f3c1609ab65e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.491570 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b5d72c8e-6a92-4e6d-9629-f3c1609ab65e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-lbqrw\" (UID: \"b5d72c8e-6a92-4e6d-9629-f3c1609ab65e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.491634 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b5d72c8e-6a92-4e6d-9629-f3c1609ab65e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-lbqrw\" (UID: \"b5d72c8e-6a92-4e6d-9629-f3c1609ab65e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.491665 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b5d72c8e-6a92-4e6d-9629-f3c1609ab65e-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-lbqrw\" (UID: \"b5d72c8e-6a92-4e6d-9629-f3c1609ab65e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.492999 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/b5d72c8e-6a92-4e6d-9629-f3c1609ab65e-service-ca\") pod \"cluster-version-operator-5c965bbfc6-lbqrw\" (UID: \"b5d72c8e-6a92-4e6d-9629-f3c1609ab65e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.500796 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5d72c8e-6a92-4e6d-9629-f3c1609ab65e-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-lbqrw\" (UID: \"b5d72c8e-6a92-4e6d-9629-f3c1609ab65e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.520470 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5d72c8e-6a92-4e6d-9629-f3c1609ab65e-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-lbqrw\" (UID: \"b5d72c8e-6a92-4e6d-9629-f3c1609ab65e\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" Jan 28 15:04:57 crc kubenswrapper[4981]: I0128 15:04:57.597009 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" Jan 28 15:04:58 crc kubenswrapper[4981]: I0128 15:04:58.260116 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" event={"ID":"b5d72c8e-6a92-4e6d-9629-f3c1609ab65e","Type":"ContainerStarted","Data":"eac13757eb21811f44ac47c8103b6956b42354ffb9078fc37387a481d2e661c7"} Jan 28 15:04:58 crc kubenswrapper[4981]: I0128 15:04:58.265299 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" event={"ID":"b5d72c8e-6a92-4e6d-9629-f3c1609ab65e","Type":"ContainerStarted","Data":"6149e425128f184e310e6e70eafe557c09d4cd0a033f9790dc8a0d7c0397f029"} Jan 28 15:04:58 crc kubenswrapper[4981]: I0128 15:04:58.285226 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-lbqrw" podStartSLOduration=84.285171983 podStartE2EDuration="1m24.285171983s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:04:58.282422075 +0000 UTC m=+109.734580396" watchObservedRunningTime="2026-01-28 15:04:58.285171983 +0000 UTC m=+109.737330265" Jan 28 15:04:59 crc kubenswrapper[4981]: I0128 15:04:59.317839 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:04:59 crc kubenswrapper[4981]: I0128 15:04:59.317969 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:04:59 crc kubenswrapper[4981]: I0128 15:04:59.317978 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:04:59 crc kubenswrapper[4981]: E0128 15:04:59.319819 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:04:59 crc kubenswrapper[4981]: I0128 15:04:59.319868 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:04:59 crc kubenswrapper[4981]: E0128 15:04:59.320069 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:04:59 crc kubenswrapper[4981]: E0128 15:04:59.320265 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:04:59 crc kubenswrapper[4981]: E0128 15:04:59.320410 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:05:01 crc kubenswrapper[4981]: I0128 15:05:01.318036 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:01 crc kubenswrapper[4981]: I0128 15:05:01.318243 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:01 crc kubenswrapper[4981]: I0128 15:05:01.318395 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:01 crc kubenswrapper[4981]: E0128 15:05:01.318402 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:05:01 crc kubenswrapper[4981]: I0128 15:05:01.318715 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:01 crc kubenswrapper[4981]: E0128 15:05:01.319442 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:05:01 crc kubenswrapper[4981]: E0128 15:05:01.319476 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:05:01 crc kubenswrapper[4981]: E0128 15:05:01.319579 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:05:01 crc kubenswrapper[4981]: I0128 15:05:01.320051 4981 scope.go:117] "RemoveContainer" containerID="8963eef891d43000aede79bee50cee3b058c3195ab3b2ba45f083ef0a156b46d" Jan 28 15:05:01 crc kubenswrapper[4981]: E0128 15:05:01.320358 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" Jan 28 15:05:03 crc kubenswrapper[4981]: I0128 15:05:03.318398 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:03 crc kubenswrapper[4981]: I0128 15:05:03.318441 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:03 crc kubenswrapper[4981]: I0128 15:05:03.318467 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:03 crc kubenswrapper[4981]: I0128 15:05:03.318427 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:03 crc kubenswrapper[4981]: E0128 15:05:03.318584 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:05:03 crc kubenswrapper[4981]: E0128 15:05:03.318657 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:05:03 crc kubenswrapper[4981]: E0128 15:05:03.318780 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:05:03 crc kubenswrapper[4981]: E0128 15:05:03.318825 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:05:05 crc kubenswrapper[4981]: I0128 15:05:05.317915 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:05 crc kubenswrapper[4981]: I0128 15:05:05.318082 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:05 crc kubenswrapper[4981]: I0128 15:05:05.318112 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:05 crc kubenswrapper[4981]: E0128 15:05:05.318250 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:05:05 crc kubenswrapper[4981]: I0128 15:05:05.318320 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:05 crc kubenswrapper[4981]: E0128 15:05:05.318483 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:05:05 crc kubenswrapper[4981]: E0128 15:05:05.318633 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:05:05 crc kubenswrapper[4981]: E0128 15:05:05.318725 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:05:07 crc kubenswrapper[4981]: I0128 15:05:07.317839 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:07 crc kubenswrapper[4981]: I0128 15:05:07.317903 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:07 crc kubenswrapper[4981]: I0128 15:05:07.317942 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:07 crc kubenswrapper[4981]: E0128 15:05:07.318033 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:05:07 crc kubenswrapper[4981]: I0128 15:05:07.318058 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:07 crc kubenswrapper[4981]: E0128 15:05:07.318268 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:05:07 crc kubenswrapper[4981]: E0128 15:05:07.318421 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:05:07 crc kubenswrapper[4981]: E0128 15:05:07.318514 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:05:09 crc kubenswrapper[4981]: E0128 15:05:09.255744 4981 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 28 15:05:09 crc kubenswrapper[4981]: I0128 15:05:09.299856 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lwvh4_3cd6b29e-682c-4aec-b039-70d6d75cbcbc/kube-multus/1.log" Jan 28 15:05:09 crc kubenswrapper[4981]: I0128 15:05:09.300991 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lwvh4_3cd6b29e-682c-4aec-b039-70d6d75cbcbc/kube-multus/0.log" Jan 28 15:05:09 crc kubenswrapper[4981]: I0128 15:05:09.301140 4981 generic.go:334] "Generic (PLEG): container finished" podID="3cd6b29e-682c-4aec-b039-70d6d75cbcbc" containerID="e787c9c633e01ce0e62e64cb5468c84dcf7452433437f827989301a9ef122368" exitCode=1 Jan 28 15:05:09 crc kubenswrapper[4981]: I0128 15:05:09.301258 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lwvh4" event={"ID":"3cd6b29e-682c-4aec-b039-70d6d75cbcbc","Type":"ContainerDied","Data":"e787c9c633e01ce0e62e64cb5468c84dcf7452433437f827989301a9ef122368"} Jan 28 15:05:09 crc kubenswrapper[4981]: I0128 15:05:09.301357 4981 scope.go:117] "RemoveContainer" containerID="1d55c8443b8f4985f462b2475250d7957006a083aacb121d253f90440f229b0c" Jan 28 15:05:09 crc kubenswrapper[4981]: I0128 15:05:09.302142 4981 scope.go:117] "RemoveContainer" containerID="e787c9c633e01ce0e62e64cb5468c84dcf7452433437f827989301a9ef122368" Jan 28 15:05:09 crc kubenswrapper[4981]: E0128 15:05:09.303232 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-lwvh4_openshift-multus(3cd6b29e-682c-4aec-b039-70d6d75cbcbc)\"" pod="openshift-multus/multus-lwvh4" podUID="3cd6b29e-682c-4aec-b039-70d6d75cbcbc" Jan 28 15:05:09 crc kubenswrapper[4981]: I0128 15:05:09.317763 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:09 crc kubenswrapper[4981]: I0128 15:05:09.317920 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:09 crc kubenswrapper[4981]: I0128 15:05:09.318158 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:09 crc kubenswrapper[4981]: E0128 15:05:09.318319 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:05:09 crc kubenswrapper[4981]: I0128 15:05:09.318372 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:09 crc kubenswrapper[4981]: E0128 15:05:09.318769 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:05:09 crc kubenswrapper[4981]: E0128 15:05:09.318643 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:05:09 crc kubenswrapper[4981]: E0128 15:05:09.318519 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:05:09 crc kubenswrapper[4981]: E0128 15:05:09.916836 4981 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:05:10 crc kubenswrapper[4981]: I0128 15:05:10.308648 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lwvh4_3cd6b29e-682c-4aec-b039-70d6d75cbcbc/kube-multus/1.log" Jan 28 15:05:11 crc kubenswrapper[4981]: I0128 15:05:11.318345 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:11 crc kubenswrapper[4981]: I0128 15:05:11.318393 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:11 crc kubenswrapper[4981]: E0128 15:05:11.318569 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:05:11 crc kubenswrapper[4981]: I0128 15:05:11.318665 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:11 crc kubenswrapper[4981]: E0128 15:05:11.319797 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:05:11 crc kubenswrapper[4981]: I0128 15:05:11.319838 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:11 crc kubenswrapper[4981]: E0128 15:05:11.319977 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:05:11 crc kubenswrapper[4981]: E0128 15:05:11.320091 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:05:13 crc kubenswrapper[4981]: I0128 15:05:13.317626 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:13 crc kubenswrapper[4981]: I0128 15:05:13.317686 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:13 crc kubenswrapper[4981]: E0128 15:05:13.317758 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:05:13 crc kubenswrapper[4981]: I0128 15:05:13.317614 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:13 crc kubenswrapper[4981]: I0128 15:05:13.317982 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:13 crc kubenswrapper[4981]: E0128 15:05:13.318001 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:05:13 crc kubenswrapper[4981]: E0128 15:05:13.318060 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:05:13 crc kubenswrapper[4981]: E0128 15:05:13.318331 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:05:14 crc kubenswrapper[4981]: I0128 15:05:14.320737 4981 scope.go:117] "RemoveContainer" containerID="8963eef891d43000aede79bee50cee3b058c3195ab3b2ba45f083ef0a156b46d" Jan 28 15:05:14 crc kubenswrapper[4981]: E0128 15:05:14.321226 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-2ss7x_openshift-ovn-kubernetes(cbdbd481-8604-433f-823e-d77a8b8517a8)\"" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" Jan 28 15:05:14 crc kubenswrapper[4981]: E0128 15:05:14.918126 4981 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:05:15 crc kubenswrapper[4981]: I0128 15:05:15.318047 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:15 crc kubenswrapper[4981]: I0128 15:05:15.318080 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:15 crc kubenswrapper[4981]: E0128 15:05:15.318179 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:05:15 crc kubenswrapper[4981]: E0128 15:05:15.318469 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:05:15 crc kubenswrapper[4981]: I0128 15:05:15.318550 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:15 crc kubenswrapper[4981]: I0128 15:05:15.318617 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:15 crc kubenswrapper[4981]: E0128 15:05:15.318796 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:05:15 crc kubenswrapper[4981]: E0128 15:05:15.318936 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:05:17 crc kubenswrapper[4981]: I0128 15:05:17.318313 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:17 crc kubenswrapper[4981]: I0128 15:05:17.318444 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:17 crc kubenswrapper[4981]: I0128 15:05:17.318445 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:17 crc kubenswrapper[4981]: I0128 15:05:17.318442 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:17 crc kubenswrapper[4981]: E0128 15:05:17.318565 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:05:17 crc kubenswrapper[4981]: E0128 15:05:17.318678 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:05:17 crc kubenswrapper[4981]: E0128 15:05:17.318887 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:05:17 crc kubenswrapper[4981]: E0128 15:05:17.319062 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:05:19 crc kubenswrapper[4981]: I0128 15:05:19.318439 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:19 crc kubenswrapper[4981]: I0128 15:05:19.318481 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:19 crc kubenswrapper[4981]: I0128 15:05:19.318528 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:19 crc kubenswrapper[4981]: I0128 15:05:19.318585 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:19 crc kubenswrapper[4981]: E0128 15:05:19.320179 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:05:19 crc kubenswrapper[4981]: E0128 15:05:19.320275 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:05:19 crc kubenswrapper[4981]: E0128 15:05:19.320381 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:05:19 crc kubenswrapper[4981]: E0128 15:05:19.320651 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:05:19 crc kubenswrapper[4981]: E0128 15:05:19.919554 4981 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:05:21 crc kubenswrapper[4981]: I0128 15:05:21.318129 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:21 crc kubenswrapper[4981]: I0128 15:05:21.318224 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:21 crc kubenswrapper[4981]: I0128 15:05:21.318313 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:21 crc kubenswrapper[4981]: E0128 15:05:21.318432 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:05:21 crc kubenswrapper[4981]: I0128 15:05:21.318479 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:21 crc kubenswrapper[4981]: E0128 15:05:21.318670 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:05:21 crc kubenswrapper[4981]: E0128 15:05:21.318789 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:05:21 crc kubenswrapper[4981]: E0128 15:05:21.319163 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:05:21 crc kubenswrapper[4981]: I0128 15:05:21.319517 4981 scope.go:117] "RemoveContainer" containerID="e787c9c633e01ce0e62e64cb5468c84dcf7452433437f827989301a9ef122368" Jan 28 15:05:22 crc kubenswrapper[4981]: I0128 15:05:22.363169 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lwvh4_3cd6b29e-682c-4aec-b039-70d6d75cbcbc/kube-multus/1.log" Jan 28 15:05:22 crc kubenswrapper[4981]: I0128 15:05:22.363670 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lwvh4" event={"ID":"3cd6b29e-682c-4aec-b039-70d6d75cbcbc","Type":"ContainerStarted","Data":"f303c909c2291ab319ea84a75c816fa5be8eb7515cc7e5cfd1c2bb7a8fd74c8b"} Jan 28 15:05:23 crc kubenswrapper[4981]: I0128 15:05:23.317997 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:23 crc kubenswrapper[4981]: I0128 15:05:23.318055 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:23 crc kubenswrapper[4981]: I0128 15:05:23.318081 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:23 crc kubenswrapper[4981]: E0128 15:05:23.318266 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:05:23 crc kubenswrapper[4981]: I0128 15:05:23.318315 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:23 crc kubenswrapper[4981]: E0128 15:05:23.318535 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:05:23 crc kubenswrapper[4981]: E0128 15:05:23.318727 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:05:23 crc kubenswrapper[4981]: E0128 15:05:23.319111 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:05:24 crc kubenswrapper[4981]: E0128 15:05:24.921349 4981 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:05:25 crc kubenswrapper[4981]: I0128 15:05:25.318779 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:25 crc kubenswrapper[4981]: I0128 15:05:25.318845 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:25 crc kubenswrapper[4981]: E0128 15:05:25.318973 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:05:25 crc kubenswrapper[4981]: I0128 15:05:25.319028 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:25 crc kubenswrapper[4981]: I0128 15:05:25.319062 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:25 crc kubenswrapper[4981]: E0128 15:05:25.319156 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:05:25 crc kubenswrapper[4981]: E0128 15:05:25.319255 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:05:25 crc kubenswrapper[4981]: E0128 15:05:25.319307 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:05:27 crc kubenswrapper[4981]: I0128 15:05:27.318840 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:27 crc kubenswrapper[4981]: I0128 15:05:27.319084 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:27 crc kubenswrapper[4981]: I0128 15:05:27.319695 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:27 crc kubenswrapper[4981]: I0128 15:05:27.320367 4981 scope.go:117] "RemoveContainer" containerID="8963eef891d43000aede79bee50cee3b058c3195ab3b2ba45f083ef0a156b46d" Jan 28 15:05:27 crc kubenswrapper[4981]: E0128 15:05:27.320771 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:05:27 crc kubenswrapper[4981]: E0128 15:05:27.320820 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:05:27 crc kubenswrapper[4981]: E0128 15:05:27.320877 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:05:27 crc kubenswrapper[4981]: I0128 15:05:27.318783 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:27 crc kubenswrapper[4981]: E0128 15:05:27.321400 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:05:28 crc kubenswrapper[4981]: I0128 15:05:28.320902 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-8rsts"] Jan 28 15:05:28 crc kubenswrapper[4981]: I0128 15:05:28.321135 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:28 crc kubenswrapper[4981]: E0128 15:05:28.321380 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:05:28 crc kubenswrapper[4981]: I0128 15:05:28.391792 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovnkube-controller/3.log" Jan 28 15:05:28 crc kubenswrapper[4981]: I0128 15:05:28.395031 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerStarted","Data":"bf005f62b2bbce81022a546c73d4104d001f145013a9720a31ce265c0c40b9ca"} Jan 28 15:05:28 crc kubenswrapper[4981]: I0128 15:05:28.397368 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:05:28 crc kubenswrapper[4981]: I0128 15:05:28.435649 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podStartSLOduration=114.435618044 podStartE2EDuration="1m54.435618044s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:28.434099416 +0000 UTC m=+139.886257687" watchObservedRunningTime="2026-01-28 15:05:28.435618044 +0000 UTC m=+139.887776325" Jan 28 15:05:29 crc kubenswrapper[4981]: I0128 15:05:29.317748 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:29 crc kubenswrapper[4981]: I0128 15:05:29.317791 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:29 crc kubenswrapper[4981]: I0128 15:05:29.317748 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:29 crc kubenswrapper[4981]: E0128 15:05:29.318874 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:05:29 crc kubenswrapper[4981]: E0128 15:05:29.318987 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:05:29 crc kubenswrapper[4981]: E0128 15:05:29.319091 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:05:29 crc kubenswrapper[4981]: E0128 15:05:29.922511 4981 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 15:05:30 crc kubenswrapper[4981]: I0128 15:05:30.318036 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:30 crc kubenswrapper[4981]: E0128 15:05:30.318298 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:05:31 crc kubenswrapper[4981]: I0128 15:05:31.318881 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:31 crc kubenswrapper[4981]: E0128 15:05:31.319404 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:05:31 crc kubenswrapper[4981]: I0128 15:05:31.318994 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:31 crc kubenswrapper[4981]: I0128 15:05:31.318987 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:31 crc kubenswrapper[4981]: E0128 15:05:31.319751 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:05:31 crc kubenswrapper[4981]: E0128 15:05:31.319913 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:05:32 crc kubenswrapper[4981]: I0128 15:05:32.318284 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:32 crc kubenswrapper[4981]: E0128 15:05:32.318517 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:05:33 crc kubenswrapper[4981]: I0128 15:05:33.318042 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:33 crc kubenswrapper[4981]: I0128 15:05:33.318085 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:33 crc kubenswrapper[4981]: I0128 15:05:33.318240 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:33 crc kubenswrapper[4981]: E0128 15:05:33.319503 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 15:05:33 crc kubenswrapper[4981]: E0128 15:05:33.319623 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 15:05:33 crc kubenswrapper[4981]: E0128 15:05:33.319711 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 15:05:34 crc kubenswrapper[4981]: I0128 15:05:34.318594 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:34 crc kubenswrapper[4981]: E0128 15:05:34.318792 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-8rsts" podUID="d5fda60c-a87b-4810-81df-4c7717d34ac1" Jan 28 15:05:35 crc kubenswrapper[4981]: I0128 15:05:35.318437 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:35 crc kubenswrapper[4981]: I0128 15:05:35.318507 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:35 crc kubenswrapper[4981]: I0128 15:05:35.318466 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:35 crc kubenswrapper[4981]: I0128 15:05:35.321632 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 28 15:05:35 crc kubenswrapper[4981]: I0128 15:05:35.322009 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 15:05:35 crc kubenswrapper[4981]: I0128 15:05:35.322872 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 15:05:35 crc kubenswrapper[4981]: I0128 15:05:35.323416 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 15:05:36 crc kubenswrapper[4981]: I0128 15:05:36.317863 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:36 crc kubenswrapper[4981]: I0128 15:05:36.320817 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 15:05:36 crc kubenswrapper[4981]: I0128 15:05:36.321560 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 15:05:37 crc kubenswrapper[4981]: I0128 15:05:37.503003 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:37 crc kubenswrapper[4981]: E0128 15:05:37.503335 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:07:39.503289663 +0000 UTC m=+270.955447904 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:37 crc kubenswrapper[4981]: I0128 15:05:37.503415 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:37 crc kubenswrapper[4981]: I0128 15:05:37.503479 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:37 crc kubenswrapper[4981]: I0128 15:05:37.505252 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:37 crc kubenswrapper[4981]: I0128 15:05:37.515020 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:37 crc kubenswrapper[4981]: I0128 15:05:37.604043 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:37 crc kubenswrapper[4981]: I0128 15:05:37.604429 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:37 crc kubenswrapper[4981]: I0128 15:05:37.608604 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:37 crc kubenswrapper[4981]: I0128 
15:05:37.612142 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:37 crc kubenswrapper[4981]: I0128 15:05:37.748142 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:37 crc kubenswrapper[4981]: I0128 15:05:37.758813 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 15:05:37 crc kubenswrapper[4981]: I0128 15:05:37.770080 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.023645 4981 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.075154 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-rb46f"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.076138 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8dqm"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.076825 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8dqm" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.077233 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-rb46f" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.079529 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-xq5cv"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.079817 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xq5cv" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.086334 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-vc85q"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.086941 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.086939 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.087689 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.087797 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.089558 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.094034 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rv5bz"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.094575 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6xthx"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.094891 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.095403 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rv5bz" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.097343 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.097654 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.097838 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.098125 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.098282 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.098455 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.098619 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.098842 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.098885 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.098928 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.099001 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.123984 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.124487 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.124886 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.127691 4981 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.130331 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.130536 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.131039 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.131295 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lm8cf"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.131470 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.131653 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.131820 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.123971 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.124021 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.124102 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.124378 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.124436 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.160845 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.161232 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.161239 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.161448 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.161387 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.161707 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.161880 4981 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.162085 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.162239 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.162342 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.163051 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.164625 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.166260 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.166566 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rm54g"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.166896 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8md5t"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.168630 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.169049 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.169379 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rm54g" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.171812 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.171903 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.172003 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.172304 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.172445 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.172642 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.174303 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.176221 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-gchkj"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.177321 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lc28t"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.178631 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.178777 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8md5t" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.178903 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gchkj" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.180905 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.184764 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bj272"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.185568 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.185620 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.186224 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.202151 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.203239 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vr269"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.203404 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.203767 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-rb46f"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.203874 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.204039 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xq5cv"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.204451 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.204706 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.207506 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.212917 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.213125 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.213290 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.217295 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.217435 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.217643 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.217753 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.217925 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.218166 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.218533 4981 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.218717 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.220467 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8dqm"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.223940 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhsb9\" (UniqueName: \"kubernetes.io/projected/7a295e4c-ed2b-4d54-8b74-2901caa05143-kube-api-access-zhsb9\") pod \"downloads-7954f5f757-xq5cv\" (UID: \"7a295e4c-ed2b-4d54-8b74-2901caa05143\") " pod="openshift-console/downloads-7954f5f757-xq5cv" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.223984 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28197d7d-0557-4f1b-822c-2de0acf2e094-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rv5bz\" (UID: \"28197d7d-0557-4f1b-822c-2de0acf2e094\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rv5bz" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224011 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c29d863-f1a8-42dc-8916-988d6d45f3d9-console-oauth-config\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224038 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bc7864e-dc24-4885-b829-e9ee56d0bb2a-config\") pod \"machine-api-operator-5694c8668f-rb46f\" (UID: \"7bc7864e-dc24-4885-b829-e9ee56d0bb2a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rb46f" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224056 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1d0e5f83-447a-4e16-a09a-3b28ecc8726f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t8dqm\" (UID: \"1d0e5f83-447a-4e16-a09a-3b28ecc8726f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8dqm" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224078 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c29d863-f1a8-42dc-8916-988d6d45f3d9-console-serving-cert\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224271 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-serving-cert\") pod \"route-controller-manager-6576b87f9c-zsq6j\" (UID: \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" Jan 28 15:05:38 crc 
kubenswrapper[4981]: I0128 15:05:38.224295 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-oauth-serving-cert\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224328 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/161e9b26-1b52-43e8-90a1-5dae906eec38-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224352 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c44ff9c-f099-4ece-9c89-9b3d8e6e1212-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6xthx\" (UID: \"7c44ff9c-f099-4ece-9c89-9b3d8e6e1212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224369 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/161e9b26-1b52-43e8-90a1-5dae906eec38-audit-policies\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224386 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ab58fe84-53f7-4d26-9606-364d442d40d2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zkh5j\" (UID: \"ab58fe84-53f7-4d26-9606-364d442d40d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224404 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c44ff9c-f099-4ece-9c89-9b3d8e6e1212-service-ca-bundle\") pod \"authentication-operator-69f744f599-6xthx\" (UID: \"7c44ff9c-f099-4ece-9c89-9b3d8e6e1212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224422 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtt5r\" (UniqueName: \"kubernetes.io/projected/5c29d863-f1a8-42dc-8916-988d6d45f3d9-kube-api-access-qtt5r\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224443 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrpqz\" (UniqueName: \"kubernetes.io/projected/28197d7d-0557-4f1b-822c-2de0acf2e094-kube-api-access-wrpqz\") pod \"openshift-apiserver-operator-796bbdcf4f-rv5bz\" (UID: \"28197d7d-0557-4f1b-822c-2de0acf2e094\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rv5bz" Jan 28 15:05:38 crc 
kubenswrapper[4981]: I0128 15:05:38.224461 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5v6h\" (UniqueName: \"kubernetes.io/projected/1d0e5f83-447a-4e16-a09a-3b28ecc8726f-kube-api-access-j5v6h\") pod \"cluster-samples-operator-665b6dd947-t8dqm\" (UID: \"1d0e5f83-447a-4e16-a09a-3b28ecc8726f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8dqm" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224477 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bzhr\" (UniqueName: \"kubernetes.io/projected/ab58fe84-53f7-4d26-9606-364d442d40d2-kube-api-access-9bzhr\") pod \"openshift-config-operator-7777fb866f-zkh5j\" (UID: \"ab58fe84-53f7-4d26-9606-364d442d40d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224507 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7bc7864e-dc24-4885-b829-e9ee56d0bb2a-images\") pod \"machine-api-operator-5694c8668f-rb46f\" (UID: \"7bc7864e-dc24-4885-b829-e9ee56d0bb2a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rb46f" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224525 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/161e9b26-1b52-43e8-90a1-5dae906eec38-encryption-config\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224551 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c44ff9c-f099-4ece-9c89-9b3d8e6e1212-config\") pod \"authentication-operator-69f744f599-6xthx\" (UID: \"7c44ff9c-f099-4ece-9c89-9b3d8e6e1212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224577 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/161e9b26-1b52-43e8-90a1-5dae906eec38-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224601 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-client-ca\") pod \"route-controller-manager-6576b87f9c-zsq6j\" (UID: \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224620 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-config\") pod \"route-controller-manager-6576b87f9c-zsq6j\" (UID: \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" Jan 28 15:05:38 crc 
kubenswrapper[4981]: I0128 15:05:38.224641 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4hnl\" (UniqueName: \"kubernetes.io/projected/7c44ff9c-f099-4ece-9c89-9b3d8e6e1212-kube-api-access-v4hnl\") pod \"authentication-operator-69f744f599-6xthx\" (UID: \"7c44ff9c-f099-4ece-9c89-9b3d8e6e1212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224665 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/161e9b26-1b52-43e8-90a1-5dae906eec38-audit-dir\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224689 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7bc7864e-dc24-4885-b829-e9ee56d0bb2a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rb46f\" (UID: \"7bc7864e-dc24-4885-b829-e9ee56d0bb2a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rb46f" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224706 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28197d7d-0557-4f1b-822c-2de0acf2e094-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rv5bz\" (UID: \"28197d7d-0557-4f1b-822c-2de0acf2e094\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rv5bz" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224721 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/161e9b26-1b52-43e8-90a1-5dae906eec38-serving-cert\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224742 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-trusted-ca-bundle\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224767 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab58fe84-53f7-4d26-9606-364d442d40d2-serving-cert\") pod \"openshift-config-operator-7777fb866f-zkh5j\" (UID: \"ab58fe84-53f7-4d26-9606-364d442d40d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224793 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-service-ca\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224813 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdgn6\" (UniqueName: \"kubernetes.io/projected/7bc7864e-dc24-4885-b829-e9ee56d0bb2a-kube-api-access-zdgn6\") pod \"machine-api-operator-5694c8668f-rb46f\" (UID: \"7bc7864e-dc24-4885-b829-e9ee56d0bb2a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rb46f" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224834 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c44ff9c-f099-4ece-9c89-9b3d8e6e1212-serving-cert\") pod \"authentication-operator-69f744f599-6xthx\" (UID: \"7c44ff9c-f099-4ece-9c89-9b3d8e6e1212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224857 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/161e9b26-1b52-43e8-90a1-5dae906eec38-etcd-client\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224880 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scg64\" (UniqueName: \"kubernetes.io/projected/161e9b26-1b52-43e8-90a1-5dae906eec38-kube-api-access-scg64\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224903 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56ksn\" (UniqueName: \"kubernetes.io/projected/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-kube-api-access-56ksn\") pod \"route-controller-manager-6576b87f9c-zsq6j\" (UID: \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.224925 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-console-config\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.232867 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.233069 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.233171 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.233279 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.233685 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-v9xj2"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 
15:05:38.234749 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.234919 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.235023 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.235105 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.235266 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.237598 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v9xj2" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.237978 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cl8rz"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.238035 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.238054 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.238209 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.238310 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.238317 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.244986 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.245307 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.248308 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.248508 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.248717 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.248892 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.251138 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 
15:05:38.251604 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.251974 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.252286 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.252548 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.252818 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.252978 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.253805 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.254158 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.256011 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.256165 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.256405 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.257026 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.257148 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.257243 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.259809 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cl8rz" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.261709 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.261966 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mn9sc"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.274469 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.276786 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mn9sc" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.282750 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tml6p"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.284271 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.284869 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.285158 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.285673 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.286033 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7zgzs"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.286626 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nc6fh"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.287046 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.287245 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.287350 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nc6fh" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.287594 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.287651 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7zgzs" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.287980 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.292931 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.293573 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.293973 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.300856 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.301934 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-bnsn8"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.302237 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.302525 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nxh5j"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.303411 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.303442 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.304295 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.304298 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nxh5j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.304632 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.305067 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8wqg4"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.305769 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8wqg4" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.306030 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wwrj7"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.306521 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wwrj7" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.307361 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x2wjc"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.308412 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nskbs"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.309103 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.309230 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.309547 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x2wjc" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.320705 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vc85q"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.320854 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8md5t"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.326488 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c44ff9c-f099-4ece-9c89-9b3d8e6e1212-service-ca-bundle\") pod \"authentication-operator-69f744f599-6xthx\" (UID: \"7c44ff9c-f099-4ece-9c89-9b3d8e6e1212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.326540 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-audit-policies\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.326579 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8rbk\" (UniqueName: \"kubernetes.io/projected/ea96a27d-8cce-4f3a-b634-97c6c0693dfb-kube-api-access-c8rbk\") pod \"migrator-59844c95c7-v9xj2\" (UID: \"ea96a27d-8cce-4f3a-b634-97c6c0693dfb\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v9xj2" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.326608 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtt5r\" (UniqueName: \"kubernetes.io/projected/5c29d863-f1a8-42dc-8916-988d6d45f3d9-kube-api-access-qtt5r\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.326638 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: 
\"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.326678 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrpqz\" (UniqueName: \"kubernetes.io/projected/28197d7d-0557-4f1b-822c-2de0acf2e094-kube-api-access-wrpqz\") pod \"openshift-apiserver-operator-796bbdcf4f-rv5bz\" (UID: \"28197d7d-0557-4f1b-822c-2de0acf2e094\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rv5bz" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.326710 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.326747 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5v6h\" (UniqueName: \"kubernetes.io/projected/1d0e5f83-447a-4e16-a09a-3b28ecc8726f-kube-api-access-j5v6h\") pod \"cluster-samples-operator-665b6dd947-t8dqm\" (UID: \"1d0e5f83-447a-4e16-a09a-3b28ecc8726f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8dqm" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.326779 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bzhr\" (UniqueName: \"kubernetes.io/projected/ab58fe84-53f7-4d26-9606-364d442d40d2-kube-api-access-9bzhr\") pod \"openshift-config-operator-7777fb866f-zkh5j\" (UID: \"ab58fe84-53f7-4d26-9606-364d442d40d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.326809 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7-trusted-ca\") pod \"console-operator-58897d9998-8md5t\" (UID: \"5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7\") " pod="openshift-console-operator/console-operator-58897d9998-8md5t" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.326846 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7bc7864e-dc24-4885-b829-e9ee56d0bb2a-images\") pod \"machine-api-operator-5694c8668f-rb46f\" (UID: \"7bc7864e-dc24-4885-b829-e9ee56d0bb2a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rb46f" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.326877 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/161e9b26-1b52-43e8-90a1-5dae906eec38-encryption-config\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.326937 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: 
\"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.326969 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c44ff9c-f099-4ece-9c89-9b3d8e6e1212-config\") pod \"authentication-operator-69f744f599-6xthx\" (UID: \"7c44ff9c-f099-4ece-9c89-9b3d8e6e1212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327000 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/161e9b26-1b52-43e8-90a1-5dae906eec38-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327031 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-client-ca\") pod \"route-controller-manager-6576b87f9c-zsq6j\" (UID: \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327062 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327099 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g597\" (UniqueName: \"kubernetes.io/projected/319c2ac2-dec9-4935-ae29-bc9b663d9820-kube-api-access-2g597\") pod \"machine-approver-56656f9798-gchkj\" (UID: \"319c2ac2-dec9-4935-ae29-bc9b663d9820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gchkj" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327126 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-config\") pod \"route-controller-manager-6576b87f9c-zsq6j\" (UID: \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327158 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4hnl\" (UniqueName: \"kubernetes.io/projected/7c44ff9c-f099-4ece-9c89-9b3d8e6e1212-kube-api-access-v4hnl\") pod \"authentication-operator-69f744f599-6xthx\" (UID: \"7c44ff9c-f099-4ece-9c89-9b3d8e6e1212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327211 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/161e9b26-1b52-43e8-90a1-5dae906eec38-audit-dir\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 
15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327250 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7bc7864e-dc24-4885-b829-e9ee56d0bb2a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rb46f\" (UID: \"7bc7864e-dc24-4885-b829-e9ee56d0bb2a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rb46f" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327277 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28197d7d-0557-4f1b-822c-2de0acf2e094-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rv5bz\" (UID: \"28197d7d-0557-4f1b-822c-2de0acf2e094\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rv5bz" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327309 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/161e9b26-1b52-43e8-90a1-5dae906eec38-serving-cert\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327338 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-trusted-ca-bundle\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327368 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327393 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p6lk\" (UniqueName: \"kubernetes.io/projected/200de941-a8aa-4930-a959-553869b8a2d0-kube-api-access-8p6lk\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327429 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab58fe84-53f7-4d26-9606-364d442d40d2-serving-cert\") pod \"openshift-config-operator-7777fb866f-zkh5j\" (UID: \"ab58fe84-53f7-4d26-9606-364d442d40d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327461 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327494 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-service-ca\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327530 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327565 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdgn6\" (UniqueName: \"kubernetes.io/projected/7bc7864e-dc24-4885-b829-e9ee56d0bb2a-kube-api-access-zdgn6\") pod \"machine-api-operator-5694c8668f-rb46f\" (UID: \"7bc7864e-dc24-4885-b829-e9ee56d0bb2a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rb46f" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327603 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c44ff9c-f099-4ece-9c89-9b3d8e6e1212-serving-cert\") pod \"authentication-operator-69f744f599-6xthx\" (UID: \"7c44ff9c-f099-4ece-9c89-9b3d8e6e1212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327634 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/161e9b26-1b52-43e8-90a1-5dae906eec38-etcd-client\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327670 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scg64\" (UniqueName: \"kubernetes.io/projected/161e9b26-1b52-43e8-90a1-5dae906eec38-kube-api-access-scg64\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327722 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56ksn\" (UniqueName: \"kubernetes.io/projected/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-kube-api-access-56ksn\") pod \"route-controller-manager-6576b87f9c-zsq6j\" (UID: \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327756 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-console-config\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327790 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/319c2ac2-dec9-4935-ae29-bc9b663d9820-auth-proxy-config\") pod 
\"machine-approver-56656f9798-gchkj\" (UID: \"319c2ac2-dec9-4935-ae29-bc9b663d9820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gchkj" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327824 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/319c2ac2-dec9-4935-ae29-bc9b663d9820-config\") pod \"machine-approver-56656f9798-gchkj\" (UID: \"319c2ac2-dec9-4935-ae29-bc9b663d9820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gchkj" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327849 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6ff8d77-eccc-4485-bca4-04baf87fb060-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-98q6g\" (UID: \"d6ff8d77-eccc-4485-bca4-04baf87fb060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327886 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhsb9\" (UniqueName: \"kubernetes.io/projected/7a295e4c-ed2b-4d54-8b74-2901caa05143-kube-api-access-zhsb9\") pod \"downloads-7954f5f757-xq5cv\" (UID: \"7a295e4c-ed2b-4d54-8b74-2901caa05143\") " pod="openshift-console/downloads-7954f5f757-xq5cv" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327918 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/200de941-a8aa-4930-a959-553869b8a2d0-audit-dir\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327949 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7-config\") pod \"console-operator-58897d9998-8md5t\" (UID: \"5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7\") " pod="openshift-console-operator/console-operator-58897d9998-8md5t" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.327996 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/319c2ac2-dec9-4935-ae29-bc9b663d9820-machine-approver-tls\") pod \"machine-approver-56656f9798-gchkj\" (UID: \"319c2ac2-dec9-4935-ae29-bc9b663d9820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gchkj" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.328025 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx8fk\" (UniqueName: \"kubernetes.io/projected/5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7-kube-api-access-xx8fk\") pod \"console-operator-58897d9998-8md5t\" (UID: \"5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7\") " pod="openshift-console-operator/console-operator-58897d9998-8md5t" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.328053 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c44ff9c-f099-4ece-9c89-9b3d8e6e1212-service-ca-bundle\") pod \"authentication-operator-69f744f599-6xthx\" (UID: \"7c44ff9c-f099-4ece-9c89-9b3d8e6e1212\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.328057 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6ff8d77-eccc-4485-bca4-04baf87fb060-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-98q6g\" (UID: \"d6ff8d77-eccc-4485-bca4-04baf87fb060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.328505 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28197d7d-0557-4f1b-822c-2de0acf2e094-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rv5bz\" (UID: \"28197d7d-0557-4f1b-822c-2de0acf2e094\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rv5bz" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.328536 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.328565 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c29d863-f1a8-42dc-8916-988d6d45f3d9-console-oauth-config\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.328588 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmxxs\" (UniqueName: \"kubernetes.io/projected/d6ff8d77-eccc-4485-bca4-04baf87fb060-kube-api-access-pmxxs\") pod \"cluster-image-registry-operator-dc59b4c8b-98q6g\" (UID: \"d6ff8d77-eccc-4485-bca4-04baf87fb060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.328637 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7-serving-cert\") pod \"console-operator-58897d9998-8md5t\" (UID: \"5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7\") " pod="openshift-console-operator/console-operator-58897d9998-8md5t" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.328669 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bc7864e-dc24-4885-b829-e9ee56d0bb2a-config\") pod \"machine-api-operator-5694c8668f-rb46f\" (UID: \"7bc7864e-dc24-4885-b829-e9ee56d0bb2a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rb46f" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.328729 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1d0e5f83-447a-4e16-a09a-3b28ecc8726f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t8dqm\" (UID: \"1d0e5f83-447a-4e16-a09a-3b28ecc8726f\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8dqm" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.328755 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c29d863-f1a8-42dc-8916-988d6d45f3d9-console-serving-cert\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.328816 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-serving-cert\") pod \"route-controller-manager-6576b87f9c-zsq6j\" (UID: \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.328866 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-oauth-serving-cert\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.328902 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.328949 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.328979 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6ff8d77-eccc-4485-bca4-04baf87fb060-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-98q6g\" (UID: \"d6ff8d77-eccc-4485-bca4-04baf87fb060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.329019 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.329048 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/161e9b26-1b52-43e8-90a1-5dae906eec38-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.329073 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c44ff9c-f099-4ece-9c89-9b3d8e6e1212-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6xthx\" (UID: \"7c44ff9c-f099-4ece-9c89-9b3d8e6e1212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.329075 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7bc7864e-dc24-4885-b829-e9ee56d0bb2a-images\") pod \"machine-api-operator-5694c8668f-rb46f\" (UID: \"7bc7864e-dc24-4885-b829-e9ee56d0bb2a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rb46f" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.329117 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/161e9b26-1b52-43e8-90a1-5dae906eec38-audit-policies\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.329139 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ab58fe84-53f7-4d26-9606-364d442d40d2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zkh5j\" (UID: \"ab58fe84-53f7-4d26-9606-364d442d40d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.329582 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/ab58fe84-53f7-4d26-9606-364d442d40d2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-zkh5j\" (UID: \"ab58fe84-53f7-4d26-9606-364d442d40d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.329530 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-config\") pod \"route-controller-manager-6576b87f9c-zsq6j\" (UID: \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.329779 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/161e9b26-1b52-43e8-90a1-5dae906eec38-audit-dir\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.330028 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.331624 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bc7864e-dc24-4885-b829-e9ee56d0bb2a-config\") pod \"machine-api-operator-5694c8668f-rb46f\" (UID: \"7bc7864e-dc24-4885-b829-e9ee56d0bb2a\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-rb46f" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.333806 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.335835 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/161e9b26-1b52-43e8-90a1-5dae906eec38-encryption-config\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.335959 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/161e9b26-1b52-43e8-90a1-5dae906eec38-audit-policies\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.336055 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/161e9b26-1b52-43e8-90a1-5dae906eec38-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.336219 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-console-config\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.337121 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28197d7d-0557-4f1b-822c-2de0acf2e094-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-rv5bz\" (UID: \"28197d7d-0557-4f1b-822c-2de0acf2e094\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rv5bz" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.337838 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c44ff9c-f099-4ece-9c89-9b3d8e6e1212-config\") pod \"authentication-operator-69f744f599-6xthx\" (UID: \"7c44ff9c-f099-4ece-9c89-9b3d8e6e1212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.338027 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-oauth-serving-cert\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.339335 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-service-ca\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.339971 4981 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/161e9b26-1b52-43e8-90a1-5dae906eec38-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.340540 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/7bc7864e-dc24-4885-b829-e9ee56d0bb2a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-rb46f\" (UID: \"7bc7864e-dc24-4885-b829-e9ee56d0bb2a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rb46f" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.340799 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c29d863-f1a8-42dc-8916-988d6d45f3d9-console-serving-cert\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.340974 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lm8cf"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.341021 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.342464 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bj272"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.344364 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/1d0e5f83-447a-4e16-a09a-3b28ecc8726f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-t8dqm\" (UID: \"1d0e5f83-447a-4e16-a09a-3b28ecc8726f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8dqm" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.344730 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/161e9b26-1b52-43e8-90a1-5dae906eec38-serving-cert\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.344729 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab58fe84-53f7-4d26-9606-364d442d40d2-serving-cert\") pod \"openshift-config-operator-7777fb866f-zkh5j\" (UID: \"ab58fe84-53f7-4d26-9606-364d442d40d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.345045 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-serving-cert\") pod \"route-controller-manager-6576b87f9c-zsq6j\" (UID: \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.345263 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7c44ff9c-f099-4ece-9c89-9b3d8e6e1212-serving-cert\") pod \"authentication-operator-69f744f599-6xthx\" (UID: \"7c44ff9c-f099-4ece-9c89-9b3d8e6e1212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.346616 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/161e9b26-1b52-43e8-90a1-5dae906eec38-etcd-client\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.346823 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qrcvg"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.347319 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-client-ca\") pod \"route-controller-manager-6576b87f9c-zsq6j\" (UID: \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.347635 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c44ff9c-f099-4ece-9c89-9b3d8e6e1212-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6xthx\" (UID: \"7c44ff9c-f099-4ece-9c89-9b3d8e6e1212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.347803 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qrcvg" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.348414 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c29d863-f1a8-42dc-8916-988d6d45f3d9-console-oauth-config\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.348707 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28197d7d-0557-4f1b-822c-2de0acf2e094-config\") pod \"openshift-apiserver-operator-796bbdcf4f-rv5bz\" (UID: \"28197d7d-0557-4f1b-822c-2de0acf2e094\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rv5bz" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.348865 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7j587"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.349238 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-trusted-ca-bundle\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.349822 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.350306 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7j587" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.350544 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.350950 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k7sgz"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.352083 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k7sgz" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.352237 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.353131 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.353702 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f495f"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.354653 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-f495f" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.357859 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.358871 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-74zjr"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.359930 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-74zjr" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.360963 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rm54g"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.362745 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mn9sc"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.364138 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.367690 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cl8rz"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.367811 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rv5bz"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.368479 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-v9xj2"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.369507 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wwrj7"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.370631 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lc28t"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.371776 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.373032 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6xthx"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.374037 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nskbs"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.375296 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qrcvg"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.376403 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x2wjc"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.377719 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.379722 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 
15:05:38.381148 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nc6fh"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.381401 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.382784 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tml6p"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.383927 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7zgzs"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.384954 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k7sgz"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.387084 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nxh5j"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.388350 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vr269"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.389388 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.390758 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7j587"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.392001 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f495f"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.392882 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qdt5k"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.394130 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.395424 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.396779 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-74zjr"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.399680 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.399958 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8wqg4"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.403032 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qdt5k"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.405313 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-ndxj2"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.406433 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-xp9h9"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.406622 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ndxj2" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.407114 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xp9h9" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.409937 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xp9h9"] Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.422446 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.430221 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p6lk\" (UniqueName: \"kubernetes.io/projected/200de941-a8aa-4930-a959-553869b8a2d0-kube-api-access-8p6lk\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.430327 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.430410 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.430481 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.430573 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/319c2ac2-dec9-4935-ae29-bc9b663d9820-auth-proxy-config\") pod \"machine-approver-56656f9798-gchkj\" (UID: \"319c2ac2-dec9-4935-ae29-bc9b663d9820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gchkj" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.430649 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/319c2ac2-dec9-4935-ae29-bc9b663d9820-config\") pod \"machine-approver-56656f9798-gchkj\" (UID: \"319c2ac2-dec9-4935-ae29-bc9b663d9820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gchkj" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.430716 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6ff8d77-eccc-4485-bca4-04baf87fb060-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-98q6g\" (UID: \"d6ff8d77-eccc-4485-bca4-04baf87fb060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.430794 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/200de941-a8aa-4930-a959-553869b8a2d0-audit-dir\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.430865 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7-config\") pod \"console-operator-58897d9998-8md5t\" (UID: \"5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7\") " pod="openshift-console-operator/console-operator-58897d9998-8md5t" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.430939 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/319c2ac2-dec9-4935-ae29-bc9b663d9820-machine-approver-tls\") pod \"machine-approver-56656f9798-gchkj\" (UID: \"319c2ac2-dec9-4935-ae29-bc9b663d9820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gchkj" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.431024 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx8fk\" (UniqueName: \"kubernetes.io/projected/5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7-kube-api-access-xx8fk\") pod \"console-operator-58897d9998-8md5t\" (UID: \"5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7\") " pod="openshift-console-operator/console-operator-58897d9998-8md5t" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.431093 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6ff8d77-eccc-4485-bca4-04baf87fb060-bound-sa-token\") pod 
\"cluster-image-registry-operator-dc59b4c8b-98q6g\" (UID: \"d6ff8d77-eccc-4485-bca4-04baf87fb060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.431170 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.431270 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmxxs\" (UniqueName: \"kubernetes.io/projected/d6ff8d77-eccc-4485-bca4-04baf87fb060-kube-api-access-pmxxs\") pod \"cluster-image-registry-operator-dc59b4c8b-98q6g\" (UID: \"d6ff8d77-eccc-4485-bca4-04baf87fb060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.431348 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7-serving-cert\") pod \"console-operator-58897d9998-8md5t\" (UID: \"5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7\") " pod="openshift-console-operator/console-operator-58897d9998-8md5t" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.431120 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/200de941-a8aa-4930-a959-553869b8a2d0-audit-dir\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.431483 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.431552 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.431626 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6ff8d77-eccc-4485-bca4-04baf87fb060-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-98q6g\" (UID: \"d6ff8d77-eccc-4485-bca4-04baf87fb060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.431696 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/319c2ac2-dec9-4935-ae29-bc9b663d9820-config\") pod \"machine-approver-56656f9798-gchkj\" (UID: \"319c2ac2-dec9-4935-ae29-bc9b663d9820\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gchkj" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.431707 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.431777 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-audit-policies\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.431816 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8rbk\" (UniqueName: \"kubernetes.io/projected/ea96a27d-8cce-4f3a-b634-97c6c0693dfb-kube-api-access-c8rbk\") pod \"migrator-59844c95c7-v9xj2\" (UID: \"ea96a27d-8cce-4f3a-b634-97c6c0693dfb\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v9xj2" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.431980 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.432024 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/319c2ac2-dec9-4935-ae29-bc9b663d9820-auth-proxy-config\") pod \"machine-approver-56656f9798-gchkj\" (UID: \"319c2ac2-dec9-4935-ae29-bc9b663d9820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gchkj" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.432031 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.432267 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7-trusted-ca\") pod \"console-operator-58897d9998-8md5t\" (UID: \"5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7\") " pod="openshift-console-operator/console-operator-58897d9998-8md5t" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.432348 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" 
Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.432363 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.432431 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.432464 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g597\" (UniqueName: \"kubernetes.io/projected/319c2ac2-dec9-4935-ae29-bc9b663d9820-kube-api-access-2g597\") pod \"machine-approver-56656f9798-gchkj\" (UID: \"319c2ac2-dec9-4935-ae29-bc9b663d9820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gchkj" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.433253 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.433699 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-audit-policies\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.433859 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7-trusted-ca\") pod \"console-operator-58897d9998-8md5t\" (UID: \"5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7\") " pod="openshift-console-operator/console-operator-58897d9998-8md5t" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.433946 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7-config\") pod \"console-operator-58897d9998-8md5t\" (UID: \"5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7\") " pod="openshift-console-operator/console-operator-58897d9998-8md5t" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.435745 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.435807 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.436085 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.436707 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d6ff8d77-eccc-4485-bca4-04baf87fb060-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-98q6g\" (UID: \"d6ff8d77-eccc-4485-bca4-04baf87fb060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.437397 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7-serving-cert\") pod \"console-operator-58897d9998-8md5t\" (UID: \"5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7\") " pod="openshift-console-operator/console-operator-58897d9998-8md5t" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.437553 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.438366 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.438669 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.438730 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/319c2ac2-dec9-4935-ae29-bc9b663d9820-machine-approver-tls\") pod \"machine-approver-56656f9798-gchkj\" (UID: \"319c2ac2-dec9-4935-ae29-bc9b663d9820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gchkj" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.438942 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.439169 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.439324 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c7438abd6a37a0e1bd2d12561128dd1cd84e2745320199a4984c8524354b8b3c"} Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.439376 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"eb4f7ab6614dba7068ee6df31439fb2140163d47a2f67c992890f123de37f069"} Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.439809 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d6ff8d77-eccc-4485-bca4-04baf87fb060-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-98q6g\" (UID: \"d6ff8d77-eccc-4485-bca4-04baf87fb060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.441013 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"78e7d429ca3b899de6c5e23e2c821cd32791fbb77ff128ad6185972c28a44e13"} Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.441075 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"03a964df89e729006018ecfe367d9caac9bffc6a632b449ba096278da4604be2"} Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.441495 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.442583 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"83b9a4c655e5b77d1ffd5d79fb8a39ed6bbbbed4400612126bd6a9b94d017ea6"} Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.442612 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d598e0f391a84e4951c0a6fd54323e4da018e2c1ac1e5508b18deed7b88cbd5e"} Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.442779 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.444290 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.480994 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.499852 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.519684 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.540813 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.561591 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.580697 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.599847 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.620730 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.640514 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.660960 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.679333 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.699824 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.720316 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.740577 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.760148 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.780609 4981 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.800558 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.821414 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.840649 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.859643 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.879070 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.899670 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.919888 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.939741 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.959898 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.979576 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 15:05:38 crc kubenswrapper[4981]: I0128 15:05:38.999125 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.018411 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.039628 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.059702 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.091256 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.099763 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.120160 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.139936 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.159883 4981 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.181159 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.199905 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.220074 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.239295 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.259520 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.280861 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.299556 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.317160 4981 request.go:700] Waited for 1.011063911s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.319007 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.338840 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.359977 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.382812 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.399885 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.420136 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.439725 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.459899 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.480083 4981 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.500157 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.540169 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.542228 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.560541 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.619321 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtt5r\" (UniqueName: \"kubernetes.io/projected/5c29d863-f1a8-42dc-8916-988d6d45f3d9-kube-api-access-qtt5r\") pod \"console-f9d7485db-vc85q\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.626688 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrpqz\" (UniqueName: \"kubernetes.io/projected/28197d7d-0557-4f1b-822c-2de0acf2e094-kube-api-access-wrpqz\") pod \"openshift-apiserver-operator-796bbdcf4f-rv5bz\" (UID: \"28197d7d-0557-4f1b-822c-2de0acf2e094\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rv5bz" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.658133 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scg64\" (UniqueName: \"kubernetes.io/projected/161e9b26-1b52-43e8-90a1-5dae906eec38-kube-api-access-scg64\") pod \"apiserver-7bbb656c7d-zn7fg\" (UID: \"161e9b26-1b52-43e8-90a1-5dae906eec38\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.661517 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5v6h\" (UniqueName: \"kubernetes.io/projected/1d0e5f83-447a-4e16-a09a-3b28ecc8726f-kube-api-access-j5v6h\") pod \"cluster-samples-operator-665b6dd947-t8dqm\" (UID: \"1d0e5f83-447a-4e16-a09a-3b28ecc8726f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8dqm" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.676779 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8dqm" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.679089 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bzhr\" (UniqueName: \"kubernetes.io/projected/ab58fe84-53f7-4d26-9606-364d442d40d2-kube-api-access-9bzhr\") pod \"openshift-config-operator-7777fb866f-zkh5j\" (UID: \"ab58fe84-53f7-4d26-9606-364d442d40d2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.696720 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4hnl\" (UniqueName: \"kubernetes.io/projected/7c44ff9c-f099-4ece-9c89-9b3d8e6e1212-kube-api-access-v4hnl\") pod \"authentication-operator-69f744f599-6xthx\" (UID: \"7c44ff9c-f099-4ece-9c89-9b3d8e6e1212\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.726309 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdgn6\" (UniqueName: \"kubernetes.io/projected/7bc7864e-dc24-4885-b829-e9ee56d0bb2a-kube-api-access-zdgn6\") pod \"machine-api-operator-5694c8668f-rb46f\" (UID: \"7bc7864e-dc24-4885-b829-e9ee56d0bb2a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-rb46f" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.743763 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhsb9\" (UniqueName: \"kubernetes.io/projected/7a295e4c-ed2b-4d54-8b74-2901caa05143-kube-api-access-zhsb9\") pod \"downloads-7954f5f757-xq5cv\" (UID: \"7a295e4c-ed2b-4d54-8b74-2901caa05143\") " pod="openshift-console/downloads-7954f5f757-xq5cv" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.753820 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56ksn\" (UniqueName: \"kubernetes.io/projected/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-kube-api-access-56ksn\") pod \"route-controller-manager-6576b87f9c-zsq6j\" (UID: \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.759381 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.783585 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.799269 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.801997 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.806679 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.811734 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.819718 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.837545 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.839516 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.849782 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rv5bz" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.860248 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.880112 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.887741 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.904905 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.916558 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8dqm"] Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.920741 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.945382 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.961215 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 15:05:39 crc kubenswrapper[4981]: I0128 15:05:39.980721 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.003632 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.003719 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-rb46f" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.019624 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.034558 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-xq5cv" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.039605 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.063127 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.079017 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.092461 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-vc85q"] Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.101719 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.121070 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 15:05:40 crc kubenswrapper[4981]: W0128 15:05:40.121585 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c29d863_f1a8_42dc_8916_988d6d45f3d9.slice/crio-c9e603e551d2ec7f53014326d6f18c85b0c3e79a2bd4e8a712d3543ee3bda684 WatchSource:0}: Error finding container c9e603e551d2ec7f53014326d6f18c85b0c3e79a2bd4e8a712d3543ee3bda684: Status 404 returned error can't find the container with id c9e603e551d2ec7f53014326d6f18c85b0c3e79a2bd4e8a712d3543ee3bda684 Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.160820 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.180726 4981 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.199463 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.219783 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.230967 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j"] Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.242980 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.259108 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.278521 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.300476 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.301911 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication-operator/authentication-operator-69f744f599-6xthx"] Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.317624 4981 request.go:700] Waited for 1.909461405s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Dcanary-serving-cert&limit=500&resourceVersion=0 Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.319479 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 28 15:05:40 crc kubenswrapper[4981]: W0128 15:05:40.327317 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c44ff9c_f099_4ece_9c89_9b3d8e6e1212.slice/crio-08a84223038bf70fb0c8bfddd942e700dbbc828b4647819079b1df56641e850a WatchSource:0}: Error finding container 08a84223038bf70fb0c8bfddd942e700dbbc828b4647819079b1df56641e850a: Status 404 returned error can't find the container with id 08a84223038bf70fb0c8bfddd942e700dbbc828b4647819079b1df56641e850a Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.339841 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.382589 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p6lk\" (UniqueName: \"kubernetes.io/projected/200de941-a8aa-4930-a959-553869b8a2d0-kube-api-access-8p6lk\") pod \"oauth-openshift-558db77b4-lm8cf\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") " pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.394079 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg"] Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.397424 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rv5bz"] Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.401643 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx8fk\" (UniqueName: \"kubernetes.io/projected/5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7-kube-api-access-xx8fk\") pod \"console-operator-58897d9998-8md5t\" (UID: \"5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7\") " pod="openshift-console-operator/console-operator-58897d9998-8md5t" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.402866 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j"] Jan 28 15:05:40 crc kubenswrapper[4981]: W0128 15:05:40.404367 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28197d7d_0557_4f1b_822c_2de0acf2e094.slice/crio-3e686aa2b8c4910517d5f613ce9e2cd7f4c650832c41f5f22c4d5bbcff25842e WatchSource:0}: Error finding container 3e686aa2b8c4910517d5f613ce9e2cd7f4c650832c41f5f22c4d5bbcff25842e: Status 404 returned error can't find the container with id 3e686aa2b8c4910517d5f613ce9e2cd7f4c650832c41f5f22c4d5bbcff25842e Jan 28 15:05:40 crc kubenswrapper[4981]: W0128 15:05:40.409044 4981 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab58fe84_53f7_4d26_9606_364d442d40d2.slice/crio-e816d3de62aa7468a315d72e2c5609ea7730957d5df07cf60d417c770b251cdd WatchSource:0}: Error finding container e816d3de62aa7468a315d72e2c5609ea7730957d5df07cf60d417c770b251cdd: Status 404 returned error can't find the container with id e816d3de62aa7468a315d72e2c5609ea7730957d5df07cf60d417c770b251cdd Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.419420 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d6ff8d77-eccc-4485-bca4-04baf87fb060-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-98q6g\" (UID: \"d6ff8d77-eccc-4485-bca4-04baf87fb060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.433739 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmxxs\" (UniqueName: \"kubernetes.io/projected/d6ff8d77-eccc-4485-bca4-04baf87fb060-kube-api-access-pmxxs\") pod \"cluster-image-registry-operator-dc59b4c8b-98q6g\" (UID: \"d6ff8d77-eccc-4485-bca4-04baf87fb060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.452153 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vc85q" event={"ID":"5c29d863-f1a8-42dc-8916-988d6d45f3d9","Type":"ContainerStarted","Data":"4ce35066677641ba023298363c67a98b43fe9954b1d6c00f6a1a900ab44d81ed"} Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.452216 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vc85q" event={"ID":"5c29d863-f1a8-42dc-8916-988d6d45f3d9","Type":"ContainerStarted","Data":"c9e603e551d2ec7f53014326d6f18c85b0c3e79a2bd4e8a712d3543ee3bda684"} Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.454817 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rv5bz" event={"ID":"28197d7d-0557-4f1b-822c-2de0acf2e094","Type":"ContainerStarted","Data":"3e686aa2b8c4910517d5f613ce9e2cd7f4c650832c41f5f22c4d5bbcff25842e"} Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.461150 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8rbk\" (UniqueName: \"kubernetes.io/projected/ea96a27d-8cce-4f3a-b634-97c6c0693dfb-kube-api-access-c8rbk\") pod \"migrator-59844c95c7-v9xj2\" (UID: \"ea96a27d-8cce-4f3a-b634-97c6c0693dfb\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v9xj2" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.461293 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.466047 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8dqm" event={"ID":"1d0e5f83-447a-4e16-a09a-3b28ecc8726f","Type":"ContainerStarted","Data":"61855db2d019aef1e940cadd88d3e3f89c1924cfd8c4a83577e88cb3806dfbef"} Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.466093 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8dqm" event={"ID":"1d0e5f83-447a-4e16-a09a-3b28ecc8726f","Type":"ContainerStarted","Data":"feece90c8cecb5b8f768f95f16c59bcd9767cedad863145624d6c5a20d677316"} Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.466103 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8dqm" event={"ID":"1d0e5f83-447a-4e16-a09a-3b28ecc8726f","Type":"ContainerStarted","Data":"7feccb14bcecaa3d12da2dcd609c879bf857381b40327e7b11914c3b64d9ede4"} Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.466266 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xq5cv"] Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.467813 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j" event={"ID":"ab58fe84-53f7-4d26-9606-364d442d40d2","Type":"ContainerStarted","Data":"e816d3de62aa7468a315d72e2c5609ea7730957d5df07cf60d417c770b251cdd"} Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.472403 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-rb46f"] Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.473338 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" event={"ID":"7c44ff9c-f099-4ece-9c89-9b3d8e6e1212","Type":"ContainerStarted","Data":"13b0e1fec67059beb53d4aa6b273e3c4272a35c73d132c2c60b823baa5969090"} Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.473377 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" event={"ID":"7c44ff9c-f099-4ece-9c89-9b3d8e6e1212","Type":"ContainerStarted","Data":"08a84223038bf70fb0c8bfddd942e700dbbc828b4647819079b1df56641e850a"} Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.475624 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g597\" (UniqueName: \"kubernetes.io/projected/319c2ac2-dec9-4935-ae29-bc9b663d9820-kube-api-access-2g597\") pod \"machine-approver-56656f9798-gchkj\" (UID: \"319c2ac2-dec9-4935-ae29-bc9b663d9820\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gchkj" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.475795 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" event={"ID":"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70","Type":"ContainerStarted","Data":"c6cf6d530c76ca558c172a358a4ecbabeedb36f9d6ec144c2c2918c0fac05835"} Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.475816 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" 
event={"ID":"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70","Type":"ContainerStarted","Data":"ba012dd36a26a894b1ad83b96a26aaa539bb4af3d91529f6777d9aa252aa00e4"} Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.475994 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.479105 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" event={"ID":"161e9b26-1b52-43e8-90a1-5dae906eec38","Type":"ContainerStarted","Data":"b8efc901ca7ac9b0d292483c163a3582923fab1b483bea920f687d8caf60b46b"} Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.479568 4981 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zsq6j container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.479604 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" podUID="f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.515919 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-8md5t" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.524270 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gchkj" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.540106 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g" Jan 28 15:05:40 crc kubenswrapper[4981]: W0128 15:05:40.545109 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod319c2ac2_dec9_4935_ae29_bc9b663d9820.slice/crio-e786866a7662c95e616831135314251e218a4af55b6f86867ffeadd062e9a90d WatchSource:0}: Error finding container e786866a7662c95e616831135314251e218a4af55b6f86867ffeadd062e9a90d: Status 404 returned error can't find the container with id e786866a7662c95e616831135314251e218a4af55b6f86867ffeadd062e9a90d Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.554544 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v9xj2" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.570558 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.570593 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08a3b621-305d-4655-bee4-78dc9766a1d1-metrics-tls\") pod \"dns-operator-744455d44c-cl8rz\" (UID: \"08a3b621-305d-4655-bee4-78dc9766a1d1\") " pod="openshift-dns-operator/dns-operator-744455d44c-cl8rz" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.570627 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-registry-certificates\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.570650 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-registry-tls\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.570682 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c8844ca-8b4a-4507-af56-255af25c0fdc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rm54g\" (UID: \"8c8844ca-8b4a-4507-af56-255af25c0fdc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rm54g" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.570698 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-encryption-config\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.570759 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.570779 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvtwv\" (UniqueName: \"kubernetes.io/projected/08a3b621-305d-4655-bee4-78dc9766a1d1-kube-api-access-pvtwv\") pod \"dns-operator-744455d44c-cl8rz\" (UID: \"08a3b621-305d-4655-bee4-78dc9766a1d1\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-cl8rz" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.570797 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-config\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.570815 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwhxz\" (UniqueName: \"kubernetes.io/projected/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-kube-api-access-dwhxz\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.570842 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl7p5\" (UniqueName: \"kubernetes.io/projected/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-kube-api-access-cl7p5\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.570866 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.570893 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-etcd-serving-ca\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.570919 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-image-import-ca\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.570936 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-audit-dir\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.570958 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7l6t\" (UniqueName: \"kubernetes.io/projected/8c8844ca-8b4a-4507-af56-255af25c0fdc-kube-api-access-n7l6t\") pod \"openshift-controller-manager-operator-756b6f6bc6-rm54g\" (UID: \"8c8844ca-8b4a-4507-af56-255af25c0fdc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rm54g" Jan 28 15:05:40 crc 
kubenswrapper[4981]: I0128 15:05:40.571022 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p6k2\" (UniqueName: \"kubernetes.io/projected/e263507d-26bb-4417-80fb-24a64dda98f5-kube-api-access-6p6k2\") pod \"machine-config-controller-84d6567774-mn9sc\" (UID: \"e263507d-26bb-4417-80fb-24a64dda98f5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mn9sc" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.571069 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0008f36-c407-4f88-9da3-55a32f23bf4d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lc28t\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.571087 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e263507d-26bb-4417-80fb-24a64dda98f5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mn9sc\" (UID: \"e263507d-26bb-4417-80fb-24a64dda98f5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mn9sc" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.571116 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e263507d-26bb-4417-80fb-24a64dda98f5-proxy-tls\") pod \"machine-config-controller-84d6567774-mn9sc\" (UID: \"e263507d-26bb-4417-80fb-24a64dda98f5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mn9sc" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.571140 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzqp6\" (UniqueName: \"kubernetes.io/projected/c0008f36-c407-4f88-9da3-55a32f23bf4d-kube-api-access-kzqp6\") pod \"controller-manager-879f6c89f-lc28t\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.571158 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-node-pullsecrets\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.571214 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8844ca-8b4a-4507-af56-255af25c0fdc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rm54g\" (UID: \"8c8844ca-8b4a-4507-af56-255af25c0fdc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rm54g" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.571244 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-audit\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " 
pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.571260 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-etcd-client\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.571352 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0008f36-c407-4f88-9da3-55a32f23bf4d-config\") pod \"controller-manager-879f6c89f-lc28t\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.571375 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0008f36-c407-4f88-9da3-55a32f23bf4d-serving-cert\") pod \"controller-manager-879f6c89f-lc28t\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.571398 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.571420 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-bound-sa-token\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.571442 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-trusted-ca\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.571490 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0008f36-c407-4f88-9da3-55a32f23bf4d-client-ca\") pod \"controller-manager-879f6c89f-lc28t\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.571513 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-serving-cert\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: E0128 15:05:40.575643 4981 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:41.075623989 +0000 UTC m=+152.527782230 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.673270 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.674308 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x5xb\" (UniqueName: \"kubernetes.io/projected/0c082a26-a425-4d2d-b547-f74498853a6b-kube-api-access-8x5xb\") pod \"catalog-operator-68c6474976-7j587\" (UID: \"0c082a26-a425-4d2d-b547-f74498853a6b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7j587" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.674335 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9879f20-7ec3-46f5-b58c-6f49e431d23f-service-ca-bundle\") pod \"router-default-5444994796-bnsn8\" (UID: \"d9879f20-7ec3-46f5-b58c-6f49e431d23f\") " pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.674384 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.674512 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-bound-sa-token\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.675891 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd2d3121-daf0-45ca-b5bd-f78fb878d6e3-cert\") pod \"ingress-canary-xp9h9\" (UID: \"bd2d3121-daf0-45ca-b5bd-f78fb878d6e3\") " pod="openshift-ingress-canary/ingress-canary-xp9h9" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.675917 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9ef8120f-cd78-42bc-9838-3449fa4cdcd0-etcd-ca\") pod \"etcd-operator-b45778765-tml6p\" (UID: \"9ef8120f-cd78-42bc-9838-3449fa4cdcd0\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.675957 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-trusted-ca\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.675977 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0008f36-c407-4f88-9da3-55a32f23bf4d-client-ca\") pod \"controller-manager-879f6c89f-lc28t\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676001 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c63cf8f0-2f23-4132-8829-fea088becd6f-config\") pod \"service-ca-operator-777779d784-qrcvg\" (UID: \"c63cf8f0-2f23-4132-8829-fea088becd6f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qrcvg" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676041 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2889542-ffb9-4af8-8f77-ccfd601dec88-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nskbs\" (UID: \"a2889542-ffb9-4af8-8f77-ccfd601dec88\") " pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676068 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-serving-cert\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676118 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9879f20-7ec3-46f5-b58c-6f49e431d23f-metrics-certs\") pod \"router-default-5444994796-bnsn8\" (UID: \"d9879f20-7ec3-46f5-b58c-6f49e431d23f\") " pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676137 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc8s2\" (UniqueName: \"kubernetes.io/projected/d919cc22-349d-44d4-9715-574a49338b02-kube-api-access-bc8s2\") pod \"machine-config-server-ndxj2\" (UID: \"d919cc22-349d-44d4-9715-574a49338b02\") " pod="openshift-machine-config-operator/machine-config-server-ndxj2" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676202 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676223 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08a3b621-305d-4655-bee4-78dc9766a1d1-metrics-tls\") pod \"dns-operator-744455d44c-cl8rz\" (UID: \"08a3b621-305d-4655-bee4-78dc9766a1d1\") " pod="openshift-dns-operator/dns-operator-744455d44c-cl8rz" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676244 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-registry-certificates\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676290 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d919cc22-349d-44d4-9715-574a49338b02-certs\") pod \"machine-config-server-ndxj2\" (UID: \"d919cc22-349d-44d4-9715-574a49338b02\") " pod="openshift-machine-config-operator/machine-config-server-ndxj2" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676313 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-registry-tls\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676333 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/395d9f95-321c-4af1-8ab5-681e84ccbae1-webhook-cert\") pod \"packageserver-d55dfcdfc-9nchn\" (UID: \"395d9f95-321c-4af1-8ab5-681e84ccbae1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676371 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56frc\" (UniqueName: \"kubernetes.io/projected/395d9f95-321c-4af1-8ab5-681e84ccbae1-kube-api-access-56frc\") pod \"packageserver-d55dfcdfc-9nchn\" (UID: \"395d9f95-321c-4af1-8ab5-681e84ccbae1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676390 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faadd380-e6cc-40da-9321-ea2c610c3580-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wwrj7\" (UID: \"faadd380-e6cc-40da-9321-ea2c610c3580\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wwrj7" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676413 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c8844ca-8b4a-4507-af56-255af25c0fdc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rm54g\" (UID: \"8c8844ca-8b4a-4507-af56-255af25c0fdc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rm54g" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676451 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-encryption-config\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676472 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0684628c-961e-4847-8396-de662a9e62a3-config\") pod \"kube-controller-manager-operator-78b949d7b-nc6fh\" (UID: \"0684628c-961e-4847-8396-de662a9e62a3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nc6fh" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676492 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s8fm\" (UniqueName: \"kubernetes.io/projected/763b096e-8072-4b12-9a49-1081568af0db-kube-api-access-6s8fm\") pod \"ingress-operator-5b745b69d9-ql4rw\" (UID: \"763b096e-8072-4b12-9a49-1081568af0db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676540 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9ef8120f-cd78-42bc-9838-3449fa4cdcd0-etcd-client\") pod \"etcd-operator-b45778765-tml6p\" (UID: \"9ef8120f-cd78-42bc-9838-3449fa4cdcd0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676567 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676619 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvtwv\" (UniqueName: \"kubernetes.io/projected/08a3b621-305d-4655-bee4-78dc9766a1d1-kube-api-access-pvtwv\") pod \"dns-operator-744455d44c-cl8rz\" (UID: \"08a3b621-305d-4655-bee4-78dc9766a1d1\") " pod="openshift-dns-operator/dns-operator-744455d44c-cl8rz" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676637 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6efa0abf-efcc-4549-8c82-970b3c150dea-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rkdvv\" (UID: \"6efa0abf-efcc-4549-8c82-970b3c150dea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676673 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwhxz\" (UniqueName: \"kubernetes.io/projected/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-kube-api-access-dwhxz\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676694 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl7p5\" (UniqueName: \"kubernetes.io/projected/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-kube-api-access-cl7p5\") pod \"apiserver-76f77b778f-bj272\" (UID: 
\"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676712 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24678df2-62ec-4c8d-8f94-91ae16c8fa04-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6cbf4\" (UID: \"24678df2-62ec-4c8d-8f94-91ae16c8fa04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676732 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d9879f20-7ec3-46f5-b58c-6f49e431d23f-default-certificate\") pod \"router-default-5444994796-bnsn8\" (UID: \"d9879f20-7ec3-46f5-b58c-6f49e431d23f\") " pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676884 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ace4475c-748f-4519-8f32-a70468fb9ee5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7zgzs\" (UID: \"ace4475c-748f-4519-8f32-a70468fb9ee5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7zgzs" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676923 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4llfp\" (UniqueName: \"kubernetes.io/projected/6efa0abf-efcc-4549-8c82-970b3c150dea-kube-api-access-4llfp\") pod \"olm-operator-6b444d44fb-rkdvv\" (UID: \"6efa0abf-efcc-4549-8c82-970b3c150dea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676942 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/395d9f95-321c-4af1-8ab5-681e84ccbae1-apiservice-cert\") pod \"packageserver-d55dfcdfc-9nchn\" (UID: \"395d9f95-321c-4af1-8ab5-681e84ccbae1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676961 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/763b096e-8072-4b12-9a49-1081568af0db-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ql4rw\" (UID: \"763b096e-8072-4b12-9a49-1081568af0db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.676997 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2aa0a406-85d4-4158-a9b9-850224bfbbc5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-k7sgz\" (UID: \"2aa0a406-85d4-4158-a9b9-850224bfbbc5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k7sgz" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677038 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kbmx\" (UniqueName: \"kubernetes.io/projected/d9879f20-7ec3-46f5-b58c-6f49e431d23f-kube-api-access-7kbmx\") pod 
\"router-default-5444994796-bnsn8\" (UID: \"d9879f20-7ec3-46f5-b58c-6f49e431d23f\") " pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677087 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grwdm\" (UniqueName: \"kubernetes.io/projected/9ef8120f-cd78-42bc-9838-3449fa4cdcd0-kube-api-access-grwdm\") pod \"etcd-operator-b45778765-tml6p\" (UID: \"9ef8120f-cd78-42bc-9838-3449fa4cdcd0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677170 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0c082a26-a425-4d2d-b547-f74498853a6b-srv-cert\") pod \"catalog-operator-68c6474976-7j587\" (UID: \"0c082a26-a425-4d2d-b547-f74498853a6b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7j587" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677230 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8596ade7-4f4d-4f58-acb8-400366812372-csi-data-dir\") pod \"csi-hostpathplugin-qdt5k\" (UID: \"8596ade7-4f4d-4f58-acb8-400366812372\") " pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677267 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nmqx\" (UniqueName: \"kubernetes.io/projected/24678df2-62ec-4c8d-8f94-91ae16c8fa04-kube-api-access-5nmqx\") pod \"machine-config-operator-74547568cd-6cbf4\" (UID: \"24678df2-62ec-4c8d-8f94-91ae16c8fa04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677305 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0008f36-c407-4f88-9da3-55a32f23bf4d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lc28t\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677328 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e263507d-26bb-4417-80fb-24a64dda98f5-proxy-tls\") pod \"machine-config-controller-84d6567774-mn9sc\" (UID: \"e263507d-26bb-4417-80fb-24a64dda98f5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mn9sc" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677367 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed4c6fed-3e17-40a9-b844-adc144028848-config-volume\") pod \"collect-profiles-29493540-78fzf\" (UID: \"ed4c6fed-3e17-40a9-b844-adc144028848\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677393 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/24678df2-62ec-4c8d-8f94-91ae16c8fa04-images\") pod \"machine-config-operator-74547568cd-6cbf4\" (UID: \"24678df2-62ec-4c8d-8f94-91ae16c8fa04\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677410 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ace4475c-748f-4519-8f32-a70468fb9ee5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7zgzs\" (UID: \"ace4475c-748f-4519-8f32-a70468fb9ee5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7zgzs" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677449 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-node-pullsecrets\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: E0128 15:05:40.677517 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:41.177486146 +0000 UTC m=+152.629644577 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677578 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-audit\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677622 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thmbb\" (UniqueName: \"kubernetes.io/projected/8596ade7-4f4d-4f58-acb8-400366812372-kube-api-access-thmbb\") pod \"csi-hostpathplugin-qdt5k\" (UID: \"8596ade7-4f4d-4f58-acb8-400366812372\") " pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677654 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-etcd-client\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677702 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/313fb5fa-63ee-4008-9e6c-94adc6fa6e67-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-x2wjc\" (UID: \"313fb5fa-63ee-4008-9e6c-94adc6fa6e67\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x2wjc" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677730 4981 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mcxk\" (UniqueName: \"kubernetes.io/projected/ed4c6fed-3e17-40a9-b844-adc144028848-kube-api-access-9mcxk\") pod \"collect-profiles-29493540-78fzf\" (UID: \"ed4c6fed-3e17-40a9-b844-adc144028848\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677756 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef8120f-cd78-42bc-9838-3449fa4cdcd0-config\") pod \"etcd-operator-b45778765-tml6p\" (UID: \"9ef8120f-cd78-42bc-9838-3449fa4cdcd0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677782 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0008f36-c407-4f88-9da3-55a32f23bf4d-config\") pod \"controller-manager-879f6c89f-lc28t\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677818 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0008f36-c407-4f88-9da3-55a32f23bf4d-serving-cert\") pod \"controller-manager-879f6c89f-lc28t\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677848 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lr8j\" (UniqueName: \"kubernetes.io/projected/bd2d3121-daf0-45ca-b5bd-f78fb878d6e3-kube-api-access-7lr8j\") pod \"ingress-canary-xp9h9\" (UID: \"bd2d3121-daf0-45ca-b5bd-f78fb878d6e3\") " pod="openshift-ingress-canary/ingress-canary-xp9h9" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677872 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d919cc22-349d-44d4-9715-574a49338b02-node-bootstrap-token\") pod \"machine-config-server-ndxj2\" (UID: \"d919cc22-349d-44d4-9715-574a49338b02\") " pod="openshift-machine-config-operator/machine-config-server-ndxj2" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677932 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c1b17582-e568-4cba-8137-d502636a43cb-signing-cabundle\") pod \"service-ca-9c57cc56f-f495f\" (UID: \"c1b17582-e568-4cba-8137-d502636a43cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-f495f" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.677978 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/24678df2-62ec-4c8d-8f94-91ae16c8fa04-proxy-tls\") pod \"machine-config-operator-74547568cd-6cbf4\" (UID: \"24678df2-62ec-4c8d-8f94-91ae16c8fa04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678003 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ed4c6fed-3e17-40a9-b844-adc144028848-secret-volume\") pod \"collect-profiles-29493540-78fzf\" (UID: \"ed4c6fed-3e17-40a9-b844-adc144028848\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678026 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9ef8120f-cd78-42bc-9838-3449fa4cdcd0-etcd-service-ca\") pod \"etcd-operator-b45778765-tml6p\" (UID: \"9ef8120f-cd78-42bc-9838-3449fa4cdcd0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678066 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/395d9f95-321c-4af1-8ab5-681e84ccbae1-tmpfs\") pod \"packageserver-d55dfcdfc-9nchn\" (UID: \"395d9f95-321c-4af1-8ab5-681e84ccbae1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678090 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/763b096e-8072-4b12-9a49-1081568af0db-metrics-tls\") pod \"ingress-operator-5b745b69d9-ql4rw\" (UID: \"763b096e-8072-4b12-9a49-1081568af0db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678128 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0c082a26-a425-4d2d-b547-f74498853a6b-profile-collector-cert\") pod \"catalog-operator-68c6474976-7j587\" (UID: \"0c082a26-a425-4d2d-b547-f74498853a6b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7j587" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678150 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m4fz\" (UniqueName: \"kubernetes.io/projected/a2889542-ffb9-4af8-8f77-ccfd601dec88-kube-api-access-5m4fz\") pod \"marketplace-operator-79b997595-nskbs\" (UID: \"a2889542-ffb9-4af8-8f77-ccfd601dec88\") " pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678176 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ace4475c-748f-4519-8f32-a70468fb9ee5-config\") pod \"kube-apiserver-operator-766d6c64bb-7zgzs\" (UID: \"ace4475c-748f-4519-8f32-a70468fb9ee5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7zgzs" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678222 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6efa0abf-efcc-4549-8c82-970b3c150dea-srv-cert\") pod \"olm-operator-6b444d44fb-rkdvv\" (UID: \"6efa0abf-efcc-4549-8c82-970b3c150dea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678247 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8596ade7-4f4d-4f58-acb8-400366812372-socket-dir\") pod 
\"csi-hostpathplugin-qdt5k\" (UID: \"8596ade7-4f4d-4f58-acb8-400366812372\") " pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678269 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8596ade7-4f4d-4f58-acb8-400366812372-registration-dir\") pod \"csi-hostpathplugin-qdt5k\" (UID: \"8596ade7-4f4d-4f58-acb8-400366812372\") " pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678295 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8596ade7-4f4d-4f58-acb8-400366812372-plugins-dir\") pod \"csi-hostpathplugin-qdt5k\" (UID: \"8596ade7-4f4d-4f58-acb8-400366812372\") " pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678330 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ef8120f-cd78-42bc-9838-3449fa4cdcd0-serving-cert\") pod \"etcd-operator-b45778765-tml6p\" (UID: \"9ef8120f-cd78-42bc-9838-3449fa4cdcd0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678354 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q44t\" (UniqueName: \"kubernetes.io/projected/2aa0a406-85d4-4158-a9b9-850224bfbbc5-kube-api-access-9q44t\") pod \"package-server-manager-789f6589d5-k7sgz\" (UID: \"2aa0a406-85d4-4158-a9b9-850224bfbbc5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k7sgz" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678377 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8596ade7-4f4d-4f58-acb8-400366812372-mountpoint-dir\") pod \"csi-hostpathplugin-qdt5k\" (UID: \"8596ade7-4f4d-4f58-acb8-400366812372\") " pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678405 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzdqh\" (UniqueName: \"kubernetes.io/projected/313fb5fa-63ee-4008-9e6c-94adc6fa6e67-kube-api-access-rzdqh\") pod \"control-plane-machine-set-operator-78cbb6b69f-x2wjc\" (UID: \"313fb5fa-63ee-4008-9e6c-94adc6fa6e67\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x2wjc" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678454 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c63cf8f0-2f23-4132-8829-fea088becd6f-serving-cert\") pod \"service-ca-operator-777779d784-qrcvg\" (UID: \"c63cf8f0-2f23-4132-8829-fea088becd6f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qrcvg" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678490 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-config\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 
15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678512 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67zwq\" (UniqueName: \"kubernetes.io/projected/c1b17582-e568-4cba-8137-d502636a43cb-kube-api-access-67zwq\") pod \"service-ca-9c57cc56f-f495f\" (UID: \"c1b17582-e568-4cba-8137-d502636a43cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-f495f" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678537 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-etcd-serving-ca\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678558 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46w9j\" (UniqueName: \"kubernetes.io/projected/c63cf8f0-2f23-4132-8829-fea088becd6f-kube-api-access-46w9j\") pod \"service-ca-operator-777779d784-qrcvg\" (UID: \"c63cf8f0-2f23-4132-8829-fea088becd6f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qrcvg" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678579 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0684628c-961e-4847-8396-de662a9e62a3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nc6fh\" (UID: \"0684628c-961e-4847-8396-de662a9e62a3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nc6fh" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678618 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-audit-dir\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678643 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2889542-ffb9-4af8-8f77-ccfd601dec88-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nskbs\" (UID: \"a2889542-ffb9-4af8-8f77-ccfd601dec88\") " pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678662 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xnv4\" (UniqueName: \"kubernetes.io/projected/5523855c-574f-47f3-8e9a-8ddbde1a35f6-kube-api-access-5xnv4\") pod \"multus-admission-controller-857f4d67dd-nxh5j\" (UID: \"5523855c-574f-47f3-8e9a-8ddbde1a35f6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nxh5j" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678700 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-image-import-ca\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678723 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faadd380-e6cc-40da-9321-ea2c610c3580-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wwrj7\" (UID: \"faadd380-e6cc-40da-9321-ea2c610c3580\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wwrj7" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678746 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae035e4-8b4b-4ff2-8be5-a68d742677f0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8wqg4\" (UID: \"eae035e4-8b4b-4ff2-8be5-a68d742677f0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8wqg4" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678772 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7l6t\" (UniqueName: \"kubernetes.io/projected/8c8844ca-8b4a-4507-af56-255af25c0fdc-kube-api-access-n7l6t\") pod \"openshift-controller-manager-operator-756b6f6bc6-rm54g\" (UID: \"8c8844ca-8b4a-4507-af56-255af25c0fdc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rm54g" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678794 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c1b17582-e568-4cba-8137-d502636a43cb-signing-key\") pod \"service-ca-9c57cc56f-f495f\" (UID: \"c1b17582-e568-4cba-8137-d502636a43cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-f495f" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678814 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj4wm\" (UniqueName: \"kubernetes.io/projected/eae035e4-8b4b-4ff2-8be5-a68d742677f0-kube-api-access-nj4wm\") pod \"kube-storage-version-migrator-operator-b67b599dd-8wqg4\" (UID: \"eae035e4-8b4b-4ff2-8be5-a68d742677f0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8wqg4" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678894 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5523855c-574f-47f3-8e9a-8ddbde1a35f6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nxh5j\" (UID: \"5523855c-574f-47f3-8e9a-8ddbde1a35f6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nxh5j" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678930 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faadd380-e6cc-40da-9321-ea2c610c3580-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wwrj7\" (UID: \"faadd380-e6cc-40da-9321-ea2c610c3580\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wwrj7" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678961 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p6k2\" (UniqueName: \"kubernetes.io/projected/e263507d-26bb-4417-80fb-24a64dda98f5-kube-api-access-6p6k2\") pod \"machine-config-controller-84d6567774-mn9sc\" (UID: \"e263507d-26bb-4417-80fb-24a64dda98f5\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mn9sc" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.678986 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e263507d-26bb-4417-80fb-24a64dda98f5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mn9sc\" (UID: \"e263507d-26bb-4417-80fb-24a64dda98f5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mn9sc" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.679010 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/42bc389e-2653-4d2a-a220-6d8f81523991-metrics-tls\") pod \"dns-default-74zjr\" (UID: \"42bc389e-2653-4d2a-a220-6d8f81523991\") " pod="openshift-dns/dns-default-74zjr" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.679033 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eae035e4-8b4b-4ff2-8be5-a68d742677f0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8wqg4\" (UID: \"eae035e4-8b4b-4ff2-8be5-a68d742677f0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8wqg4" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.679097 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzqp6\" (UniqueName: \"kubernetes.io/projected/c0008f36-c407-4f88-9da3-55a32f23bf4d-kube-api-access-kzqp6\") pod \"controller-manager-879f6c89f-lc28t\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.679140 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d9879f20-7ec3-46f5-b58c-6f49e431d23f-stats-auth\") pod \"router-default-5444994796-bnsn8\" (UID: \"d9879f20-7ec3-46f5-b58c-6f49e431d23f\") " pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.679176 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/763b096e-8072-4b12-9a49-1081568af0db-trusted-ca\") pod \"ingress-operator-5b745b69d9-ql4rw\" (UID: \"763b096e-8072-4b12-9a49-1081568af0db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.685157 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42bc389e-2653-4d2a-a220-6d8f81523991-config-volume\") pod \"dns-default-74zjr\" (UID: \"42bc389e-2653-4d2a-a220-6d8f81523991\") " pod="openshift-dns/dns-default-74zjr" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.685244 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctwm6\" (UniqueName: \"kubernetes.io/projected/42bc389e-2653-4d2a-a220-6d8f81523991-kube-api-access-ctwm6\") pod \"dns-default-74zjr\" (UID: \"42bc389e-2653-4d2a-a220-6d8f81523991\") " pod="openshift-dns/dns-default-74zjr" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.685288 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8844ca-8b4a-4507-af56-255af25c0fdc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rm54g\" (UID: \"8c8844ca-8b4a-4507-af56-255af25c0fdc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rm54g" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.685316 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0684628c-961e-4847-8396-de662a9e62a3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nc6fh\" (UID: \"0684628c-961e-4847-8396-de662a9e62a3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nc6fh" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.688363 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0008f36-c407-4f88-9da3-55a32f23bf4d-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-lc28t\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.690307 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-trusted-ca-bundle\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.692590 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c8844ca-8b4a-4507-af56-255af25c0fdc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rm54g\" (UID: \"8c8844ca-8b4a-4507-af56-255af25c0fdc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rm54g" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.692783 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-node-pullsecrets\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.694970 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.704565 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-config\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.706307 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0008f36-c407-4f88-9da3-55a32f23bf4d-client-ca\") pod 
\"controller-manager-879f6c89f-lc28t\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.709402 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-trusted-ca\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.711368 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-etcd-serving-ca\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.711674 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0008f36-c407-4f88-9da3-55a32f23bf4d-config\") pod \"controller-manager-879f6c89f-lc28t\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.713453 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-audit-dir\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.714456 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-image-import-ca\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.714989 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-audit\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.715511 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e263507d-26bb-4417-80fb-24a64dda98f5-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mn9sc\" (UID: \"e263507d-26bb-4417-80fb-24a64dda98f5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mn9sc" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.716023 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c8844ca-8b4a-4507-af56-255af25c0fdc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rm54g\" (UID: \"8c8844ca-8b4a-4507-af56-255af25c0fdc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rm54g" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.717477 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c0008f36-c407-4f88-9da3-55a32f23bf4d-serving-cert\") pod \"controller-manager-879f6c89f-lc28t\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.719225 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-etcd-client\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.720349 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-serving-cert\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.727721 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-registry-certificates\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.732078 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/08a3b621-305d-4655-bee4-78dc9766a1d1-metrics-tls\") pod \"dns-operator-744455d44c-cl8rz\" (UID: \"08a3b621-305d-4655-bee4-78dc9766a1d1\") " pod="openshift-dns-operator/dns-operator-744455d44c-cl8rz" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.735392 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-registry-tls\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.735749 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lm8cf"] Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.737099 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e263507d-26bb-4417-80fb-24a64dda98f5-proxy-tls\") pod \"machine-config-controller-84d6567774-mn9sc\" (UID: \"e263507d-26bb-4417-80fb-24a64dda98f5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mn9sc" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.742660 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-encryption-config\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.749061 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwhxz\" (UniqueName: \"kubernetes.io/projected/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-kube-api-access-dwhxz\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.753336 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-8md5t"] Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.760782 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl7p5\" (UniqueName: \"kubernetes.io/projected/9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6-kube-api-access-cl7p5\") pod \"apiserver-76f77b778f-bj272\" (UID: \"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6\") " pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.761663 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.774342 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvtwv\" (UniqueName: \"kubernetes.io/projected/08a3b621-305d-4655-bee4-78dc9766a1d1-kube-api-access-pvtwv\") pod \"dns-operator-744455d44c-cl8rz\" (UID: \"08a3b621-305d-4655-bee4-78dc9766a1d1\") " pod="openshift-dns-operator/dns-operator-744455d44c-cl8rz" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.788852 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzdqh\" (UniqueName: \"kubernetes.io/projected/313fb5fa-63ee-4008-9e6c-94adc6fa6e67-kube-api-access-rzdqh\") pod \"control-plane-machine-set-operator-78cbb6b69f-x2wjc\" (UID: \"313fb5fa-63ee-4008-9e6c-94adc6fa6e67\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x2wjc" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.789080 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8596ade7-4f4d-4f58-acb8-400366812372-mountpoint-dir\") pod \"csi-hostpathplugin-qdt5k\" (UID: \"8596ade7-4f4d-4f58-acb8-400366812372\") " pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.789120 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c63cf8f0-2f23-4132-8829-fea088becd6f-serving-cert\") pod \"service-ca-operator-777779d784-qrcvg\" (UID: \"c63cf8f0-2f23-4132-8829-fea088becd6f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qrcvg" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.789149 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67zwq\" (UniqueName: \"kubernetes.io/projected/c1b17582-e568-4cba-8137-d502636a43cb-kube-api-access-67zwq\") pod \"service-ca-9c57cc56f-f495f\" (UID: \"c1b17582-e568-4cba-8137-d502636a43cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-f495f" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.789168 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0684628c-961e-4847-8396-de662a9e62a3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nc6fh\" (UID: \"0684628c-961e-4847-8396-de662a9e62a3\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nc6fh" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.789849 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/8596ade7-4f4d-4f58-acb8-400366812372-mountpoint-dir\") pod \"csi-hostpathplugin-qdt5k\" (UID: \"8596ade7-4f4d-4f58-acb8-400366812372\") " pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790044 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46w9j\" (UniqueName: \"kubernetes.io/projected/c63cf8f0-2f23-4132-8829-fea088becd6f-kube-api-access-46w9j\") pod \"service-ca-operator-777779d784-qrcvg\" (UID: \"c63cf8f0-2f23-4132-8829-fea088becd6f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qrcvg" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790112 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2889542-ffb9-4af8-8f77-ccfd601dec88-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nskbs\" (UID: \"a2889542-ffb9-4af8-8f77-ccfd601dec88\") " pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790131 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xnv4\" (UniqueName: \"kubernetes.io/projected/5523855c-574f-47f3-8e9a-8ddbde1a35f6-kube-api-access-5xnv4\") pod \"multus-admission-controller-857f4d67dd-nxh5j\" (UID: \"5523855c-574f-47f3-8e9a-8ddbde1a35f6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nxh5j" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790158 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faadd380-e6cc-40da-9321-ea2c610c3580-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wwrj7\" (UID: \"faadd380-e6cc-40da-9321-ea2c610c3580\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wwrj7" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790179 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae035e4-8b4b-4ff2-8be5-a68d742677f0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8wqg4\" (UID: \"eae035e4-8b4b-4ff2-8be5-a68d742677f0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8wqg4" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790224 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c1b17582-e568-4cba-8137-d502636a43cb-signing-key\") pod \"service-ca-9c57cc56f-f495f\" (UID: \"c1b17582-e568-4cba-8137-d502636a43cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-f495f" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790245 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj4wm\" (UniqueName: \"kubernetes.io/projected/eae035e4-8b4b-4ff2-8be5-a68d742677f0-kube-api-access-nj4wm\") pod \"kube-storage-version-migrator-operator-b67b599dd-8wqg4\" (UID: \"eae035e4-8b4b-4ff2-8be5-a68d742677f0\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8wqg4" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790268 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5523855c-574f-47f3-8e9a-8ddbde1a35f6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nxh5j\" (UID: \"5523855c-574f-47f3-8e9a-8ddbde1a35f6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nxh5j" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790309 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faadd380-e6cc-40da-9321-ea2c610c3580-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wwrj7\" (UID: \"faadd380-e6cc-40da-9321-ea2c610c3580\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wwrj7" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790332 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/42bc389e-2653-4d2a-a220-6d8f81523991-metrics-tls\") pod \"dns-default-74zjr\" (UID: \"42bc389e-2653-4d2a-a220-6d8f81523991\") " pod="openshift-dns/dns-default-74zjr" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790350 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eae035e4-8b4b-4ff2-8be5-a68d742677f0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8wqg4\" (UID: \"eae035e4-8b4b-4ff2-8be5-a68d742677f0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8wqg4" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790379 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d9879f20-7ec3-46f5-b58c-6f49e431d23f-stats-auth\") pod \"router-default-5444994796-bnsn8\" (UID: \"d9879f20-7ec3-46f5-b58c-6f49e431d23f\") " pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790396 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42bc389e-2653-4d2a-a220-6d8f81523991-config-volume\") pod \"dns-default-74zjr\" (UID: \"42bc389e-2653-4d2a-a220-6d8f81523991\") " pod="openshift-dns/dns-default-74zjr" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790411 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctwm6\" (UniqueName: \"kubernetes.io/projected/42bc389e-2653-4d2a-a220-6d8f81523991-kube-api-access-ctwm6\") pod \"dns-default-74zjr\" (UID: \"42bc389e-2653-4d2a-a220-6d8f81523991\") " pod="openshift-dns/dns-default-74zjr" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790428 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/763b096e-8072-4b12-9a49-1081568af0db-trusted-ca\") pod \"ingress-operator-5b745b69d9-ql4rw\" (UID: \"763b096e-8072-4b12-9a49-1081568af0db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790449 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0684628c-961e-4847-8396-de662a9e62a3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nc6fh\" (UID: \"0684628c-961e-4847-8396-de662a9e62a3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nc6fh" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790484 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x5xb\" (UniqueName: \"kubernetes.io/projected/0c082a26-a425-4d2d-b547-f74498853a6b-kube-api-access-8x5xb\") pod \"catalog-operator-68c6474976-7j587\" (UID: \"0c082a26-a425-4d2d-b547-f74498853a6b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7j587" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790500 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9879f20-7ec3-46f5-b58c-6f49e431d23f-service-ca-bundle\") pod \"router-default-5444994796-bnsn8\" (UID: \"d9879f20-7ec3-46f5-b58c-6f49e431d23f\") " pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790518 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd2d3121-daf0-45ca-b5bd-f78fb878d6e3-cert\") pod \"ingress-canary-xp9h9\" (UID: \"bd2d3121-daf0-45ca-b5bd-f78fb878d6e3\") " pod="openshift-ingress-canary/ingress-canary-xp9h9" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790537 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9ef8120f-cd78-42bc-9838-3449fa4cdcd0-etcd-ca\") pod \"etcd-operator-b45778765-tml6p\" (UID: \"9ef8120f-cd78-42bc-9838-3449fa4cdcd0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790558 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c63cf8f0-2f23-4132-8829-fea088becd6f-config\") pod \"service-ca-operator-777779d784-qrcvg\" (UID: \"c63cf8f0-2f23-4132-8829-fea088becd6f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qrcvg" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790580 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2889542-ffb9-4af8-8f77-ccfd601dec88-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nskbs\" (UID: \"a2889542-ffb9-4af8-8f77-ccfd601dec88\") " pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790599 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc8s2\" (UniqueName: \"kubernetes.io/projected/d919cc22-349d-44d4-9715-574a49338b02-kube-api-access-bc8s2\") pod \"machine-config-server-ndxj2\" (UID: \"d919cc22-349d-44d4-9715-574a49338b02\") " pod="openshift-machine-config-operator/machine-config-server-ndxj2" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790615 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9879f20-7ec3-46f5-b58c-6f49e431d23f-metrics-certs\") pod \"router-default-5444994796-bnsn8\" (UID: \"d9879f20-7ec3-46f5-b58c-6f49e431d23f\") " 
pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790635 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56frc\" (UniqueName: \"kubernetes.io/projected/395d9f95-321c-4af1-8ab5-681e84ccbae1-kube-api-access-56frc\") pod \"packageserver-d55dfcdfc-9nchn\" (UID: \"395d9f95-321c-4af1-8ab5-681e84ccbae1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790652 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faadd380-e6cc-40da-9321-ea2c610c3580-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wwrj7\" (UID: \"faadd380-e6cc-40da-9321-ea2c610c3580\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wwrj7" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790666 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d919cc22-349d-44d4-9715-574a49338b02-certs\") pod \"machine-config-server-ndxj2\" (UID: \"d919cc22-349d-44d4-9715-574a49338b02\") " pod="openshift-machine-config-operator/machine-config-server-ndxj2" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790685 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/395d9f95-321c-4af1-8ab5-681e84ccbae1-webhook-cert\") pod \"packageserver-d55dfcdfc-9nchn\" (UID: \"395d9f95-321c-4af1-8ab5-681e84ccbae1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790699 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s8fm\" (UniqueName: \"kubernetes.io/projected/763b096e-8072-4b12-9a49-1081568af0db-kube-api-access-6s8fm\") pod \"ingress-operator-5b745b69d9-ql4rw\" (UID: \"763b096e-8072-4b12-9a49-1081568af0db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790715 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0684628c-961e-4847-8396-de662a9e62a3-config\") pod \"kube-controller-manager-operator-78b949d7b-nc6fh\" (UID: \"0684628c-961e-4847-8396-de662a9e62a3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nc6fh" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790732 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9ef8120f-cd78-42bc-9838-3449fa4cdcd0-etcd-client\") pod \"etcd-operator-b45778765-tml6p\" (UID: \"9ef8120f-cd78-42bc-9838-3449fa4cdcd0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790755 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6efa0abf-efcc-4549-8c82-970b3c150dea-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rkdvv\" (UID: \"6efa0abf-efcc-4549-8c82-970b3c150dea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790772 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24678df2-62ec-4c8d-8f94-91ae16c8fa04-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6cbf4\" (UID: \"24678df2-62ec-4c8d-8f94-91ae16c8fa04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790796 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790812 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d9879f20-7ec3-46f5-b58c-6f49e431d23f-default-certificate\") pod \"router-default-5444994796-bnsn8\" (UID: \"d9879f20-7ec3-46f5-b58c-6f49e431d23f\") " pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790836 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ace4475c-748f-4519-8f32-a70468fb9ee5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7zgzs\" (UID: \"ace4475c-748f-4519-8f32-a70468fb9ee5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7zgzs" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.790856 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4llfp\" (UniqueName: \"kubernetes.io/projected/6efa0abf-efcc-4549-8c82-970b3c150dea-kube-api-access-4llfp\") pod \"olm-operator-6b444d44fb-rkdvv\" (UID: \"6efa0abf-efcc-4549-8c82-970b3c150dea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.794766 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/395d9f95-321c-4af1-8ab5-681e84ccbae1-apiservice-cert\") pod \"packageserver-d55dfcdfc-9nchn\" (UID: \"395d9f95-321c-4af1-8ab5-681e84ccbae1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.794808 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/763b096e-8072-4b12-9a49-1081568af0db-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ql4rw\" (UID: \"763b096e-8072-4b12-9a49-1081568af0db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.794833 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kbmx\" (UniqueName: \"kubernetes.io/projected/d9879f20-7ec3-46f5-b58c-6f49e431d23f-kube-api-access-7kbmx\") pod \"router-default-5444994796-bnsn8\" (UID: \"d9879f20-7ec3-46f5-b58c-6f49e431d23f\") " pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.794852 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grwdm\" (UniqueName: 
\"kubernetes.io/projected/9ef8120f-cd78-42bc-9838-3449fa4cdcd0-kube-api-access-grwdm\") pod \"etcd-operator-b45778765-tml6p\" (UID: \"9ef8120f-cd78-42bc-9838-3449fa4cdcd0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.794874 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2aa0a406-85d4-4158-a9b9-850224bfbbc5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-k7sgz\" (UID: \"2aa0a406-85d4-4158-a9b9-850224bfbbc5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k7sgz" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.794924 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0c082a26-a425-4d2d-b547-f74498853a6b-srv-cert\") pod \"catalog-operator-68c6474976-7j587\" (UID: \"0c082a26-a425-4d2d-b547-f74498853a6b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7j587" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.794948 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nmqx\" (UniqueName: \"kubernetes.io/projected/24678df2-62ec-4c8d-8f94-91ae16c8fa04-kube-api-access-5nmqx\") pod \"machine-config-operator-74547568cd-6cbf4\" (UID: \"24678df2-62ec-4c8d-8f94-91ae16c8fa04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.794972 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8596ade7-4f4d-4f58-acb8-400366812372-csi-data-dir\") pod \"csi-hostpathplugin-qdt5k\" (UID: \"8596ade7-4f4d-4f58-acb8-400366812372\") " pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.794999 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed4c6fed-3e17-40a9-b844-adc144028848-config-volume\") pod \"collect-profiles-29493540-78fzf\" (UID: \"ed4c6fed-3e17-40a9-b844-adc144028848\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.795023 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/24678df2-62ec-4c8d-8f94-91ae16c8fa04-images\") pod \"machine-config-operator-74547568cd-6cbf4\" (UID: \"24678df2-62ec-4c8d-8f94-91ae16c8fa04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.795043 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ace4475c-748f-4519-8f32-a70468fb9ee5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7zgzs\" (UID: \"ace4475c-748f-4519-8f32-a70468fb9ee5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7zgzs" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.795091 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thmbb\" (UniqueName: \"kubernetes.io/projected/8596ade7-4f4d-4f58-acb8-400366812372-kube-api-access-thmbb\") pod \"csi-hostpathplugin-qdt5k\" (UID: 
\"8596ade7-4f4d-4f58-acb8-400366812372\") " pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.795118 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/313fb5fa-63ee-4008-9e6c-94adc6fa6e67-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-x2wjc\" (UID: \"313fb5fa-63ee-4008-9e6c-94adc6fa6e67\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x2wjc" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.795144 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mcxk\" (UniqueName: \"kubernetes.io/projected/ed4c6fed-3e17-40a9-b844-adc144028848-kube-api-access-9mcxk\") pod \"collect-profiles-29493540-78fzf\" (UID: \"ed4c6fed-3e17-40a9-b844-adc144028848\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.795166 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef8120f-cd78-42bc-9838-3449fa4cdcd0-config\") pod \"etcd-operator-b45778765-tml6p\" (UID: \"9ef8120f-cd78-42bc-9838-3449fa4cdcd0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.795200 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d919cc22-349d-44d4-9715-574a49338b02-node-bootstrap-token\") pod \"machine-config-server-ndxj2\" (UID: \"d919cc22-349d-44d4-9715-574a49338b02\") " pod="openshift-machine-config-operator/machine-config-server-ndxj2" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.795226 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lr8j\" (UniqueName: \"kubernetes.io/projected/bd2d3121-daf0-45ca-b5bd-f78fb878d6e3-kube-api-access-7lr8j\") pod \"ingress-canary-xp9h9\" (UID: \"bd2d3121-daf0-45ca-b5bd-f78fb878d6e3\") " pod="openshift-ingress-canary/ingress-canary-xp9h9" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.795244 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c1b17582-e568-4cba-8137-d502636a43cb-signing-cabundle\") pod \"service-ca-9c57cc56f-f495f\" (UID: \"c1b17582-e568-4cba-8137-d502636a43cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-f495f" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.795270 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/24678df2-62ec-4c8d-8f94-91ae16c8fa04-proxy-tls\") pod \"machine-config-operator-74547568cd-6cbf4\" (UID: \"24678df2-62ec-4c8d-8f94-91ae16c8fa04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.795289 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9ef8120f-cd78-42bc-9838-3449fa4cdcd0-etcd-service-ca\") pod \"etcd-operator-b45778765-tml6p\" (UID: \"9ef8120f-cd78-42bc-9838-3449fa4cdcd0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.795360 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed4c6fed-3e17-40a9-b844-adc144028848-secret-volume\") pod \"collect-profiles-29493540-78fzf\" (UID: \"ed4c6fed-3e17-40a9-b844-adc144028848\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.795417 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/395d9f95-321c-4af1-8ab5-681e84ccbae1-tmpfs\") pod \"packageserver-d55dfcdfc-9nchn\" (UID: \"395d9f95-321c-4af1-8ab5-681e84ccbae1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.795842 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/9ef8120f-cd78-42bc-9838-3449fa4cdcd0-etcd-service-ca\") pod \"etcd-operator-b45778765-tml6p\" (UID: \"9ef8120f-cd78-42bc-9838-3449fa4cdcd0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.796412 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/763b096e-8072-4b12-9a49-1081568af0db-metrics-tls\") pod \"ingress-operator-5b745b69d9-ql4rw\" (UID: \"763b096e-8072-4b12-9a49-1081568af0db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.796456 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0c082a26-a425-4d2d-b547-f74498853a6b-profile-collector-cert\") pod \"catalog-operator-68c6474976-7j587\" (UID: \"0c082a26-a425-4d2d-b547-f74498853a6b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7j587" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.796481 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m4fz\" (UniqueName: \"kubernetes.io/projected/a2889542-ffb9-4af8-8f77-ccfd601dec88-kube-api-access-5m4fz\") pod \"marketplace-operator-79b997595-nskbs\" (UID: \"a2889542-ffb9-4af8-8f77-ccfd601dec88\") " pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.796499 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6efa0abf-efcc-4549-8c82-970b3c150dea-srv-cert\") pod \"olm-operator-6b444d44fb-rkdvv\" (UID: \"6efa0abf-efcc-4549-8c82-970b3c150dea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.796516 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8596ade7-4f4d-4f58-acb8-400366812372-socket-dir\") pod \"csi-hostpathplugin-qdt5k\" (UID: \"8596ade7-4f4d-4f58-acb8-400366812372\") " pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.796534 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ace4475c-748f-4519-8f32-a70468fb9ee5-config\") pod \"kube-apiserver-operator-766d6c64bb-7zgzs\" (UID: \"ace4475c-748f-4519-8f32-a70468fb9ee5\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7zgzs" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.796649 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2889542-ffb9-4af8-8f77-ccfd601dec88-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nskbs\" (UID: \"a2889542-ffb9-4af8-8f77-ccfd601dec88\") " pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.796795 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8596ade7-4f4d-4f58-acb8-400366812372-registration-dir\") pod \"csi-hostpathplugin-qdt5k\" (UID: \"8596ade7-4f4d-4f58-acb8-400366812372\") " pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.796799 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c63cf8f0-2f23-4132-8829-fea088becd6f-config\") pod \"service-ca-operator-777779d784-qrcvg\" (UID: \"c63cf8f0-2f23-4132-8829-fea088becd6f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qrcvg" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.796880 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/395d9f95-321c-4af1-8ab5-681e84ccbae1-tmpfs\") pod \"packageserver-d55dfcdfc-9nchn\" (UID: \"395d9f95-321c-4af1-8ab5-681e84ccbae1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.797344 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae035e4-8b4b-4ff2-8be5-a68d742677f0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-8wqg4\" (UID: \"eae035e4-8b4b-4ff2-8be5-a68d742677f0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8wqg4" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.797450 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/9ef8120f-cd78-42bc-9838-3449fa4cdcd0-etcd-ca\") pod \"etcd-operator-b45778765-tml6p\" (UID: \"9ef8120f-cd78-42bc-9838-3449fa4cdcd0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.798374 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8596ade7-4f4d-4f58-acb8-400366812372-registration-dir\") pod \"csi-hostpathplugin-qdt5k\" (UID: \"8596ade7-4f4d-4f58-acb8-400366812372\") " pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.798500 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8596ade7-4f4d-4f58-acb8-400366812372-plugins-dir\") pod \"csi-hostpathplugin-qdt5k\" (UID: \"8596ade7-4f4d-4f58-acb8-400366812372\") " pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.798534 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9ef8120f-cd78-42bc-9838-3449fa4cdcd0-serving-cert\") pod \"etcd-operator-b45778765-tml6p\" (UID: \"9ef8120f-cd78-42bc-9838-3449fa4cdcd0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.798601 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q44t\" (UniqueName: \"kubernetes.io/projected/2aa0a406-85d4-4158-a9b9-850224bfbbc5-kube-api-access-9q44t\") pod \"package-server-manager-789f6589d5-k7sgz\" (UID: \"2aa0a406-85d4-4158-a9b9-850224bfbbc5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k7sgz" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.799268 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/8596ade7-4f4d-4f58-acb8-400366812372-plugins-dir\") pod \"csi-hostpathplugin-qdt5k\" (UID: \"8596ade7-4f4d-4f58-acb8-400366812372\") " pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.802497 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8596ade7-4f4d-4f58-acb8-400366812372-socket-dir\") pod \"csi-hostpathplugin-qdt5k\" (UID: \"8596ade7-4f4d-4f58-acb8-400366812372\") " pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.805343 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c1b17582-e568-4cba-8137-d502636a43cb-signing-cabundle\") pod \"service-ca-9c57cc56f-f495f\" (UID: \"c1b17582-e568-4cba-8137-d502636a43cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-f495f" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.805464 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2889542-ffb9-4af8-8f77-ccfd601dec88-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nskbs\" (UID: \"a2889542-ffb9-4af8-8f77-ccfd601dec88\") " pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.805812 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9ef8120f-cd78-42bc-9838-3449fa4cdcd0-config\") pod \"etcd-operator-b45778765-tml6p\" (UID: \"9ef8120f-cd78-42bc-9838-3449fa4cdcd0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.806203 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ace4475c-748f-4519-8f32-a70468fb9ee5-config\") pod \"kube-apiserver-operator-766d6c64bb-7zgzs\" (UID: \"ace4475c-748f-4519-8f32-a70468fb9ee5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7zgzs" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.808671 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/395d9f95-321c-4af1-8ab5-681e84ccbae1-apiservice-cert\") pod \"packageserver-d55dfcdfc-9nchn\" (UID: \"395d9f95-321c-4af1-8ab5-681e84ccbae1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.809059 4981 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c63cf8f0-2f23-4132-8829-fea088becd6f-serving-cert\") pod \"service-ca-operator-777779d784-qrcvg\" (UID: \"c63cf8f0-2f23-4132-8829-fea088becd6f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qrcvg" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.809628 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/763b096e-8072-4b12-9a49-1081568af0db-trusted-ca\") pod \"ingress-operator-5b745b69d9-ql4rw\" (UID: \"763b096e-8072-4b12-9a49-1081568af0db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.809882 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9879f20-7ec3-46f5-b58c-6f49e431d23f-service-ca-bundle\") pod \"router-default-5444994796-bnsn8\" (UID: \"d9879f20-7ec3-46f5-b58c-6f49e431d23f\") " pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.810082 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/8596ade7-4f4d-4f58-acb8-400366812372-csi-data-dir\") pod \"csi-hostpathplugin-qdt5k\" (UID: \"8596ade7-4f4d-4f58-acb8-400366812372\") " pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:40 crc kubenswrapper[4981]: E0128 15:05:40.810270 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:41.310246362 +0000 UTC m=+152.762404603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.810931 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0c082a26-a425-4d2d-b547-f74498853a6b-profile-collector-cert\") pod \"catalog-operator-68c6474976-7j587\" (UID: \"0c082a26-a425-4d2d-b547-f74498853a6b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7j587" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.811016 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0684628c-961e-4847-8396-de662a9e62a3-config\") pod \"kube-controller-manager-operator-78b949d7b-nc6fh\" (UID: \"0684628c-961e-4847-8396-de662a9e62a3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nc6fh" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.811056 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/42bc389e-2653-4d2a-a220-6d8f81523991-metrics-tls\") pod \"dns-default-74zjr\" (UID: \"42bc389e-2653-4d2a-a220-6d8f81523991\") " pod="openshift-dns/dns-default-74zjr" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.811495 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed4c6fed-3e17-40a9-b844-adc144028848-config-volume\") pod \"collect-profiles-29493540-78fzf\" (UID: \"ed4c6fed-3e17-40a9-b844-adc144028848\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.811612 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d9879f20-7ec3-46f5-b58c-6f49e431d23f-metrics-certs\") pod \"router-default-5444994796-bnsn8\" (UID: \"d9879f20-7ec3-46f5-b58c-6f49e431d23f\") " pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.811738 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eae035e4-8b4b-4ff2-8be5-a68d742677f0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-8wqg4\" (UID: \"eae035e4-8b4b-4ff2-8be5-a68d742677f0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8wqg4" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.812135 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/24678df2-62ec-4c8d-8f94-91ae16c8fa04-images\") pod \"machine-config-operator-74547568cd-6cbf4\" (UID: \"24678df2-62ec-4c8d-8f94-91ae16c8fa04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.812812 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ed4c6fed-3e17-40a9-b844-adc144028848-secret-volume\") pod \"collect-profiles-29493540-78fzf\" (UID: \"ed4c6fed-3e17-40a9-b844-adc144028848\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.813753 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/24678df2-62ec-4c8d-8f94-91ae16c8fa04-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6cbf4\" (UID: \"24678df2-62ec-4c8d-8f94-91ae16c8fa04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.814761 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42bc389e-2653-4d2a-a220-6d8f81523991-config-volume\") pod \"dns-default-74zjr\" (UID: \"42bc389e-2653-4d2a-a220-6d8f81523991\") " pod="openshift-dns/dns-default-74zjr" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.815016 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d9879f20-7ec3-46f5-b58c-6f49e431d23f-stats-auth\") pod \"router-default-5444994796-bnsn8\" (UID: \"d9879f20-7ec3-46f5-b58c-6f49e431d23f\") " pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.816098 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/763b096e-8072-4b12-9a49-1081568af0db-metrics-tls\") pod \"ingress-operator-5b745b69d9-ql4rw\" (UID: \"763b096e-8072-4b12-9a49-1081568af0db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.816728 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ef8120f-cd78-42bc-9838-3449fa4cdcd0-serving-cert\") pod \"etcd-operator-b45778765-tml6p\" (UID: \"9ef8120f-cd78-42bc-9838-3449fa4cdcd0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.816981 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/faadd380-e6cc-40da-9321-ea2c610c3580-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wwrj7\" (UID: \"faadd380-e6cc-40da-9321-ea2c610c3580\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wwrj7" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.819296 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/2aa0a406-85d4-4158-a9b9-850224bfbbc5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-k7sgz\" (UID: \"2aa0a406-85d4-4158-a9b9-850224bfbbc5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k7sgz" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.821272 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c1b17582-e568-4cba-8137-d502636a43cb-signing-key\") pod \"service-ca-9c57cc56f-f495f\" (UID: \"c1b17582-e568-4cba-8137-d502636a43cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-f495f" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.824730 
4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d919cc22-349d-44d4-9715-574a49338b02-node-bootstrap-token\") pod \"machine-config-server-ndxj2\" (UID: \"d919cc22-349d-44d4-9715-574a49338b02\") " pod="openshift-machine-config-operator/machine-config-server-ndxj2" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.818663 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/313fb5fa-63ee-4008-9e6c-94adc6fa6e67-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-x2wjc\" (UID: \"313fb5fa-63ee-4008-9e6c-94adc6fa6e67\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x2wjc" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.828098 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d919cc22-349d-44d4-9715-574a49338b02-certs\") pod \"machine-config-server-ndxj2\" (UID: \"d919cc22-349d-44d4-9715-574a49338b02\") " pod="openshift-machine-config-operator/machine-config-server-ndxj2" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.828148 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/395d9f95-321c-4af1-8ab5-681e84ccbae1-webhook-cert\") pod \"packageserver-d55dfcdfc-9nchn\" (UID: \"395d9f95-321c-4af1-8ab5-681e84ccbae1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.828449 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6efa0abf-efcc-4549-8c82-970b3c150dea-srv-cert\") pod \"olm-operator-6b444d44fb-rkdvv\" (UID: \"6efa0abf-efcc-4549-8c82-970b3c150dea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.828693 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0684628c-961e-4847-8396-de662a9e62a3-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-nc6fh\" (UID: \"0684628c-961e-4847-8396-de662a9e62a3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nc6fh" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.828856 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0c082a26-a425-4d2d-b547-f74498853a6b-srv-cert\") pod \"catalog-operator-68c6474976-7j587\" (UID: \"0c082a26-a425-4d2d-b547-f74498853a6b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7j587" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.830643 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9ef8120f-cd78-42bc-9838-3449fa4cdcd0-etcd-client\") pod \"etcd-operator-b45778765-tml6p\" (UID: \"9ef8120f-cd78-42bc-9838-3449fa4cdcd0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.832161 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ace4475c-748f-4519-8f32-a70468fb9ee5-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-7zgzs\" (UID: 
\"ace4475c-748f-4519-8f32-a70468fb9ee5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7zgzs" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.832180 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.832856 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-bound-sa-token\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.833021 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/faadd380-e6cc-40da-9321-ea2c610c3580-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wwrj7\" (UID: \"faadd380-e6cc-40da-9321-ea2c610c3580\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wwrj7" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.834503 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/6efa0abf-efcc-4549-8c82-970b3c150dea-profile-collector-cert\") pod \"olm-operator-6b444d44fb-rkdvv\" (UID: \"6efa0abf-efcc-4549-8c82-970b3c150dea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.835992 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d9879f20-7ec3-46f5-b58c-6f49e431d23f-default-certificate\") pod \"router-default-5444994796-bnsn8\" (UID: \"d9879f20-7ec3-46f5-b58c-6f49e431d23f\") " pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.836651 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bd2d3121-daf0-45ca-b5bd-f78fb878d6e3-cert\") pod \"ingress-canary-xp9h9\" (UID: \"bd2d3121-daf0-45ca-b5bd-f78fb878d6e3\") " pod="openshift-ingress-canary/ingress-canary-xp9h9" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.837579 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzqp6\" (UniqueName: \"kubernetes.io/projected/c0008f36-c407-4f88-9da3-55a32f23bf4d-kube-api-access-kzqp6\") pod \"controller-manager-879f6c89f-lc28t\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.837935 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p6k2\" (UniqueName: \"kubernetes.io/projected/e263507d-26bb-4417-80fb-24a64dda98f5-kube-api-access-6p6k2\") pod \"machine-config-controller-84d6567774-mn9sc\" (UID: \"e263507d-26bb-4417-80fb-24a64dda98f5\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mn9sc" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.838220 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5523855c-574f-47f3-8e9a-8ddbde1a35f6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-nxh5j\" (UID: 
\"5523855c-574f-47f3-8e9a-8ddbde1a35f6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nxh5j" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.839566 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/24678df2-62ec-4c8d-8f94-91ae16c8fa04-proxy-tls\") pod \"machine-config-operator-74547568cd-6cbf4\" (UID: \"24678df2-62ec-4c8d-8f94-91ae16c8fa04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.862149 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cl8rz" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.871790 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mn9sc" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.883788 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7l6t\" (UniqueName: \"kubernetes.io/projected/8c8844ca-8b4a-4507-af56-255af25c0fdc-kube-api-access-n7l6t\") pod \"openshift-controller-manager-operator-756b6f6bc6-rm54g\" (UID: \"8c8844ca-8b4a-4507-af56-255af25c0fdc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rm54g" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.886102 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzdqh\" (UniqueName: \"kubernetes.io/projected/313fb5fa-63ee-4008-9e6c-94adc6fa6e67-kube-api-access-rzdqh\") pod \"control-plane-machine-set-operator-78cbb6b69f-x2wjc\" (UID: \"313fb5fa-63ee-4008-9e6c-94adc6fa6e67\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x2wjc" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.902021 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:40 crc kubenswrapper[4981]: E0128 15:05:40.902788 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:41.402768986 +0000 UTC m=+152.854927217 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.911565 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0684628c-961e-4847-8396-de662a9e62a3-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-nc6fh\" (UID: \"0684628c-961e-4847-8396-de662a9e62a3\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nc6fh" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.929434 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-v9xj2"] Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.935101 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46w9j\" (UniqueName: \"kubernetes.io/projected/c63cf8f0-2f23-4132-8829-fea088becd6f-kube-api-access-46w9j\") pod \"service-ca-operator-777779d784-qrcvg\" (UID: \"c63cf8f0-2f23-4132-8829-fea088becd6f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qrcvg" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.936097 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4llfp\" (UniqueName: \"kubernetes.io/projected/6efa0abf-efcc-4549-8c82-970b3c150dea-kube-api-access-4llfp\") pod \"olm-operator-6b444d44fb-rkdvv\" (UID: \"6efa0abf-efcc-4549-8c82-970b3c150dea\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv" Jan 28 15:05:40 crc kubenswrapper[4981]: W0128 15:05:40.950106 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea96a27d_8cce_4f3a_b634_97c6c0693dfb.slice/crio-38b3c0d0cc341842594986e81758eed12e1aee74ba93db9e7629a1f899b00a33 WatchSource:0}: Error finding container 38b3c0d0cc341842594986e81758eed12e1aee74ba93db9e7629a1f899b00a33: Status 404 returned error can't find the container with id 38b3c0d0cc341842594986e81758eed12e1aee74ba93db9e7629a1f899b00a33 Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.966380 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67zwq\" (UniqueName: \"kubernetes.io/projected/c1b17582-e568-4cba-8137-d502636a43cb-kube-api-access-67zwq\") pod \"service-ca-9c57cc56f-f495f\" (UID: \"c1b17582-e568-4cba-8137-d502636a43cb\") " pod="openshift-service-ca/service-ca-9c57cc56f-f495f" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.975461 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xnv4\" (UniqueName: \"kubernetes.io/projected/5523855c-574f-47f3-8e9a-8ddbde1a35f6-kube-api-access-5xnv4\") pod \"multus-admission-controller-857f4d67dd-nxh5j\" (UID: \"5523855c-574f-47f3-8e9a-8ddbde1a35f6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-nxh5j" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.994272 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x2wjc" Jan 28 15:05:40 crc kubenswrapper[4981]: I0128 15:05:40.998746 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qrcvg" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.000978 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faadd380-e6cc-40da-9321-ea2c610c3580-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wwrj7\" (UID: \"faadd380-e6cc-40da-9321-ea2c610c3580\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wwrj7" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.002456 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g"] Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.003799 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:41 crc kubenswrapper[4981]: E0128 15:05:41.004340 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:41.504323776 +0000 UTC m=+152.956482017 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.024074 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.036729 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-f495f" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.044806 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q44t\" (UniqueName: \"kubernetes.io/projected/2aa0a406-85d4-4158-a9b9-850224bfbbc5-kube-api-access-9q44t\") pod \"package-server-manager-789f6589d5-k7sgz\" (UID: \"2aa0a406-85d4-4158-a9b9-850224bfbbc5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k7sgz" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.067051 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kbmx\" (UniqueName: \"kubernetes.io/projected/d9879f20-7ec3-46f5-b58c-6f49e431d23f-kube-api-access-7kbmx\") pod \"router-default-5444994796-bnsn8\" (UID: \"d9879f20-7ec3-46f5-b58c-6f49e431d23f\") " pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.067143 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc8s2\" (UniqueName: \"kubernetes.io/projected/d919cc22-349d-44d4-9715-574a49338b02-kube-api-access-bc8s2\") pod \"machine-config-server-ndxj2\" (UID: \"d919cc22-349d-44d4-9715-574a49338b02\") " pod="openshift-machine-config-operator/machine-config-server-ndxj2" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.073693 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ndxj2" Jan 28 15:05:41 crc kubenswrapper[4981]: W0128 15:05:41.074244 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6ff8d77_eccc_4485_bca4_04baf87fb060.slice/crio-fb9bf3be0435bcb1ce446513abc28ce197426b7e5e1d27cbd5552361d0d3f641 WatchSource:0}: Error finding container fb9bf3be0435bcb1ce446513abc28ce197426b7e5e1d27cbd5552361d0d3f641: Status 404 returned error can't find the container with id fb9bf3be0435bcb1ce446513abc28ce197426b7e5e1d27cbd5552361d0d3f641 Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.074634 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ace4475c-748f-4519-8f32-a70468fb9ee5-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-7zgzs\" (UID: \"ace4475c-748f-4519-8f32-a70468fb9ee5\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7zgzs" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.096106 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grwdm\" (UniqueName: \"kubernetes.io/projected/9ef8120f-cd78-42bc-9838-3449fa4cdcd0-kube-api-access-grwdm\") pod \"etcd-operator-b45778765-tml6p\" (UID: \"9ef8120f-cd78-42bc-9838-3449fa4cdcd0\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.099324 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rm54g" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.105526 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:41 crc kubenswrapper[4981]: E0128 15:05:41.106233 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:41.606204833 +0000 UTC m=+153.058363074 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.108954 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.118547 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thmbb\" (UniqueName: \"kubernetes.io/projected/8596ade7-4f4d-4f58-acb8-400366812372-kube-api-access-thmbb\") pod \"csi-hostpathplugin-qdt5k\" (UID: \"8596ade7-4f4d-4f58-acb8-400366812372\") " pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.135616 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj4wm\" (UniqueName: \"kubernetes.io/projected/eae035e4-8b4b-4ff2-8be5-a68d742677f0-kube-api-access-nj4wm\") pod \"kube-storage-version-migrator-operator-b67b599dd-8wqg4\" (UID: \"eae035e4-8b4b-4ff2-8be5-a68d742677f0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8wqg4" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.156857 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mcxk\" (UniqueName: \"kubernetes.io/projected/ed4c6fed-3e17-40a9-b844-adc144028848-kube-api-access-9mcxk\") pod \"collect-profiles-29493540-78fzf\" (UID: \"ed4c6fed-3e17-40a9-b844-adc144028848\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.176220 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.176530 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lr8j\" (UniqueName: \"kubernetes.io/projected/bd2d3121-daf0-45ca-b5bd-f78fb878d6e3-kube-api-access-7lr8j\") pod \"ingress-canary-xp9h9\" (UID: \"bd2d3121-daf0-45ca-b5bd-f78fb878d6e3\") " pod="openshift-ingress-canary/ingress-canary-xp9h9" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.184506 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nc6fh" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.197379 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7zgzs" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.207503 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:41 crc kubenswrapper[4981]: E0128 15:05:41.207816 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:41.707802483 +0000 UTC m=+153.159960724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.211477 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.215028 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m4fz\" (UniqueName: \"kubernetes.io/projected/a2889542-ffb9-4af8-8f77-ccfd601dec88-kube-api-access-5m4fz\") pod \"marketplace-operator-79b997595-nskbs\" (UID: \"a2889542-ffb9-4af8-8f77-ccfd601dec88\") " pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.219846 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctwm6\" (UniqueName: \"kubernetes.io/projected/42bc389e-2653-4d2a-a220-6d8f81523991-kube-api-access-ctwm6\") pod \"dns-default-74zjr\" (UID: \"42bc389e-2653-4d2a-a220-6d8f81523991\") " pod="openshift-dns/dns-default-74zjr" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.241359 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mn9sc"] Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.247892 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cl8rz"] Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.257042 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s8fm\" (UniqueName: \"kubernetes.io/projected/763b096e-8072-4b12-9a49-1081568af0db-kube-api-access-6s8fm\") pod \"ingress-operator-5b745b69d9-ql4rw\" (UID: \"763b096e-8072-4b12-9a49-1081568af0db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.266424 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8wqg4" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.272661 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-nxh5j" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.273881 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nmqx\" (UniqueName: \"kubernetes.io/projected/24678df2-62ec-4c8d-8f94-91ae16c8fa04-kube-api-access-5nmqx\") pod \"machine-config-operator-74547568cd-6cbf4\" (UID: \"24678df2-62ec-4c8d-8f94-91ae16c8fa04\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.278926 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wwrj7" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.284512 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.293805 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/763b096e-8072-4b12-9a49-1081568af0db-bound-sa-token\") pod \"ingress-operator-5b745b69d9-ql4rw\" (UID: \"763b096e-8072-4b12-9a49-1081568af0db\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.303867 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.308438 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:41 crc kubenswrapper[4981]: E0128 15:05:41.308538 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:41.80850746 +0000 UTC m=+153.260665701 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.308887 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:41 crc kubenswrapper[4981]: E0128 15:05:41.309343 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:41.809331721 +0000 UTC m=+153.261489962 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.315981 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x5xb\" (UniqueName: \"kubernetes.io/projected/0c082a26-a425-4d2d-b547-f74498853a6b-kube-api-access-8x5xb\") pod \"catalog-operator-68c6474976-7j587\" (UID: \"0c082a26-a425-4d2d-b547-f74498853a6b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7j587" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.327671 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k7sgz" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.345394 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56frc\" (UniqueName: \"kubernetes.io/projected/395d9f95-321c-4af1-8ab5-681e84ccbae1-kube-api-access-56frc\") pod \"packageserver-d55dfcdfc-9nchn\" (UID: \"395d9f95-321c-4af1-8ab5-681e84ccbae1\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.345865 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-74zjr" Jan 28 15:05:41 crc kubenswrapper[4981]: W0128 15:05:41.356142 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd919cc22_349d_44d4_9715_574a49338b02.slice/crio-e24ad373140972c3eda38612eec22cf3ef75f99e8686c0047e34cc0e5fb0fc6a WatchSource:0}: Error finding container e24ad373140972c3eda38612eec22cf3ef75f99e8686c0047e34cc0e5fb0fc6a: Status 404 returned error can't find the container with id e24ad373140972c3eda38612eec22cf3ef75f99e8686c0047e34cc0e5fb0fc6a Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.358159 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-bj272"] Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.366778 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x2wjc"] Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.367817 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.383289 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-xp9h9" Jan 28 15:05:41 crc kubenswrapper[4981]: W0128 15:05:41.399655 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08a3b621_305d_4655_bee4_78dc9766a1d1.slice/crio-9ed1d7efa6cebda3a68725ed16114cc77581ab0325b31237f2289575adff6cab WatchSource:0}: Error finding container 9ed1d7efa6cebda3a68725ed16114cc77581ab0325b31237f2289575adff6cab: Status 404 returned error can't find the container with id 9ed1d7efa6cebda3a68725ed16114cc77581ab0325b31237f2289575adff6cab Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.410205 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:41 crc kubenswrapper[4981]: E0128 15:05:41.410746 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:41.910727006 +0000 UTC m=+153.362885247 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.440055 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-f495f"] Jan 28 15:05:41 crc kubenswrapper[4981]: W0128 15:05:41.450087 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod313fb5fa_63ee_4008_9e6c_94adc6fa6e67.slice/crio-4da6b89a66b19b8b3317348166458641aebf694a87f608116eba32764d4e649a WatchSource:0}: Error finding container 4da6b89a66b19b8b3317348166458641aebf694a87f608116eba32764d4e649a: Status 404 returned error can't find the container with id 4da6b89a66b19b8b3317348166458641aebf694a87f608116eba32764d4e649a Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.492550 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.493029 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv"] Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.493327 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g" event={"ID":"d6ff8d77-eccc-4485-bca4-04baf87fb060","Type":"ContainerStarted","Data":"fb9bf3be0435bcb1ce446513abc28ce197426b7e5e1d27cbd5552361d0d3f641"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.497109 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gchkj" event={"ID":"319c2ac2-dec9-4935-ae29-bc9b663d9820","Type":"ContainerStarted","Data":"1b9152b75844fdec19c4f9e4c60eb3f4ca480efb492a5fdeec042c120660c499"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.497140 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gchkj" event={"ID":"319c2ac2-dec9-4935-ae29-bc9b663d9820","Type":"ContainerStarted","Data":"e786866a7662c95e616831135314251e218a4af55b6f86867ffeadd062e9a90d"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.498770 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ndxj2" event={"ID":"d919cc22-349d-44d4-9715-574a49338b02","Type":"ContainerStarted","Data":"e24ad373140972c3eda38612eec22cf3ef75f99e8686c0047e34cc0e5fb0fc6a"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.500760 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x2wjc" event={"ID":"313fb5fa-63ee-4008-9e6c-94adc6fa6e67","Type":"ContainerStarted","Data":"4da6b89a66b19b8b3317348166458641aebf694a87f608116eba32764d4e649a"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.501970 4981 generic.go:334] "Generic (PLEG): container finished" podID="ab58fe84-53f7-4d26-9606-364d442d40d2" containerID="fac941052798f584cc03c8d0737cf8d6a6f5ea10a41759275dcf80951765bdaa" 
exitCode=0 Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.502001 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j" event={"ID":"ab58fe84-53f7-4d26-9606-364d442d40d2","Type":"ContainerDied","Data":"fac941052798f584cc03c8d0737cf8d6a6f5ea10a41759275dcf80951765bdaa"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.504463 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.511629 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v9xj2" event={"ID":"ea96a27d-8cce-4f3a-b634-97c6c0693dfb","Type":"ContainerStarted","Data":"4344dbcdacb7f5bf1f222f3f7597a7f2b80454a9723badd945db09c7b58413f1"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.511686 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v9xj2" event={"ID":"ea96a27d-8cce-4f3a-b634-97c6c0693dfb","Type":"ContainerStarted","Data":"38b3c0d0cc341842594986e81758eed12e1aee74ba93db9e7629a1f899b00a33"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.512279 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:41 crc kubenswrapper[4981]: E0128 15:05:41.512614 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:42.012600493 +0000 UTC m=+153.464758734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.517967 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bj272" event={"ID":"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6","Type":"ContainerStarted","Data":"e8c2982b2a751369b7d1591ff3e67ce1fb06e9ab934e6933ed9ea933f80e4535"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.526219 4981 generic.go:334] "Generic (PLEG): container finished" podID="161e9b26-1b52-43e8-90a1-5dae906eec38" containerID="53eb709eb452cf15dd761ecaa45890d1a33660ea30a2117d11cce3f8a2795adb" exitCode=0 Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.526565 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" event={"ID":"161e9b26-1b52-43e8-90a1-5dae906eec38","Type":"ContainerDied","Data":"53eb709eb452cf15dd761ecaa45890d1a33660ea30a2117d11cce3f8a2795adb"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.531000 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cl8rz" event={"ID":"08a3b621-305d-4655-bee4-78dc9766a1d1","Type":"ContainerStarted","Data":"9ed1d7efa6cebda3a68725ed16114cc77581ab0325b31237f2289575adff6cab"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.557828 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" event={"ID":"200de941-a8aa-4930-a959-553869b8a2d0","Type":"ContainerStarted","Data":"dc1a18a58244c1e2deae9c36ed9771841c97e8d14d067ff804171360ff777150"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.558852 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.577934 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xq5cv" event={"ID":"7a295e4c-ed2b-4d54-8b74-2901caa05143","Type":"ContainerStarted","Data":"f7e21aad4d70d990b20e0f41ff8a34a3a225f2e55ce449140e7b07fba5ceaa35"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.578004 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xq5cv" event={"ID":"7a295e4c-ed2b-4d54-8b74-2901caa05143","Type":"ContainerStarted","Data":"358d3ff5e82ed828649d59ec4f15c7d7276719679a445766fb5ee25ba4d4964e"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.578877 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-xq5cv" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.589876 4981 patch_prober.go:28] interesting pod/downloads-7954f5f757-xq5cv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.589947 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xq5cv" podUID="7a295e4c-ed2b-4d54-8b74-2901caa05143" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.592065 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8md5t" event={"ID":"5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7","Type":"ContainerStarted","Data":"6d6bc9c1b79ecff30fc2da57c6838f1b67be5ccd94bba6f1b1cc4fef2349fd62"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.592112 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-8md5t" event={"ID":"5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7","Type":"ContainerStarted","Data":"ac10c400aa742f9ac1805d0da212bfd40d1c4aad78f4323849e2680d31878573"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.593305 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-8md5t" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.604405 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rv5bz" event={"ID":"28197d7d-0557-4f1b-822c-2de0acf2e094","Type":"ContainerStarted","Data":"9f83d062173b95be590f99ed43ecc4fde530acae4165af0ae293c412a2c08ca5"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.625165 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7j587" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.625266 4981 patch_prober.go:28] interesting pod/console-operator-58897d9998-8md5t container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.625304 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-8md5t" podUID="5b4c7f18-93bf-4edb-a8e4-226b9b2a02b7" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.625853 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.626330 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mn9sc" event={"ID":"e263507d-26bb-4417-80fb-24a64dda98f5","Type":"ContainerStarted","Data":"e51576730693666fa8e4e02136011bc19dfa46d1c3e5eb37b4e7d9bfdb8b1f6b"} Jan 28 15:05:41 crc kubenswrapper[4981]: E0128 15:05:41.626566 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:42.126547401 +0000 UTC m=+153.578705642 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.629222 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lc28t"] Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.645423 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-rb46f" event={"ID":"7bc7864e-dc24-4885-b829-e9ee56d0bb2a","Type":"ContainerStarted","Data":"d7f40f93363532334ece5f59bcb7fc0b2520349121cacce30153d68e48305140"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.645466 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-rb46f" event={"ID":"7bc7864e-dc24-4885-b829-e9ee56d0bb2a","Type":"ContainerStarted","Data":"1c58fdb178df8ff1856e937d7878d3fccef90c0e89d2691709dfc07f7b27016d"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.645475 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-rb46f" event={"ID":"7bc7864e-dc24-4885-b829-e9ee56d0bb2a","Type":"ContainerStarted","Data":"b6b466da31be0bc338a1103fcd71a09fb9bfce93d499321b1fe900b1d21bff5e"} Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.650421 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.728705 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.731067 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qrcvg"] Jan 28 15:05:41 crc kubenswrapper[4981]: E0128 15:05:41.731402 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:42.231387982 +0000 UTC m=+153.683546223 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.839899 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:41 crc kubenswrapper[4981]: E0128 15:05:41.845552 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:42.345524785 +0000 UTC m=+153.797683026 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.910347 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rm54g"] Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.943666 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:41 crc kubenswrapper[4981]: I0128 15:05:41.949981 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7zgzs"] Jan 28 15:05:41 crc kubenswrapper[4981]: E0128 15:05:41.950329 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:42.450306904 +0000 UTC m=+153.902465145 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.050286 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:42 crc kubenswrapper[4981]: E0128 15:05:42.051961 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:42.551926355 +0000 UTC m=+154.004084596 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.128311 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tml6p"] Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.153262 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:42 crc kubenswrapper[4981]: E0128 15:05:42.153782 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:42.653764902 +0000 UTC m=+154.105923143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.255015 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:05:42 crc kubenswrapper[4981]: E0128 15:05:42.255402 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:42.755381312 +0000 UTC m=+154.207539553 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.357392 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269"
Jan 28 15:05:42 crc kubenswrapper[4981]: E0128 15:05:42.357874 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:42.857855464 +0000 UTC m=+154.310013705 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.460256 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:05:42 crc kubenswrapper[4981]: E0128 15:05:42.460847 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:42.960827689 +0000 UTC m=+154.412985930 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.562284 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269"
Jan 28 15:05:42 crc kubenswrapper[4981]: E0128 15:05:42.562867 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:43.0628424 +0000 UTC m=+154.515000641 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.619318 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-rb46f" podStartSLOduration=128.619293226 podStartE2EDuration="2m8.619293226s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:42.618136127 +0000 UTC m=+154.070294378" watchObservedRunningTime="2026-01-28 15:05:42.619293226 +0000 UTC m=+154.071451467"
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.644512 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-vc85q" podStartSLOduration=128.644473633 podStartE2EDuration="2m8.644473633s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:42.64394986 +0000 UTC m=+154.096108101" watchObservedRunningTime="2026-01-28 15:05:42.644473633 +0000 UTC m=+154.096631874"
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.664060 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:05:42 crc kubenswrapper[4981]: E0128 15:05:42.664323 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:43.164284446 +0000 UTC m=+154.616442687 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.664457 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269"
Jan 28 15:05:42 crc kubenswrapper[4981]: E0128 15:05:42.664829 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:43.164814809 +0000 UTC m=+154.616973050 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.672012 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-nxh5j"]
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.705243 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" event={"ID":"200de941-a8aa-4930-a959-553869b8a2d0","Type":"ContainerStarted","Data":"61a0c2a5bd23f2fda71f61874858bde5400ecb45987d2943dc3767dacb900614"}
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.707055 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf"
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.744759 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv" event={"ID":"6efa0abf-efcc-4549-8c82-970b3c150dea","Type":"ContainerStarted","Data":"b3b7df1345234785f84ae8db246e8d10d1dab69ad1b0fae6b1f90e1145bd7658"}
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.745069 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv" event={"ID":"6efa0abf-efcc-4549-8c82-970b3c150dea","Type":"ContainerStarted","Data":"b2484fdfc7230fc6e7145d99e80f8dae49563f74c2638601a7cf85ef69a3db09"}
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.749595 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv"
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.760582 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" podStartSLOduration=127.760551204 podStartE2EDuration="2m7.760551204s" podCreationTimestamp="2026-01-28 15:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:42.717925942 +0000 UTC m=+154.170084183" watchObservedRunningTime="2026-01-28 15:05:42.760551204 +0000 UTC m=+154.212709445"
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.766171 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nc6fh"]
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.766302 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8wqg4"]
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.766330 4981 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rkdvv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body=
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.766384 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv" podUID="6efa0abf-efcc-4549-8c82-970b3c150dea" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused"
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.767227 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:05:42 crc kubenswrapper[4981]: E0128 15:05:42.767940 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:43.267914137 +0000 UTC m=+154.720072378 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.796491 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7zgzs" event={"ID":"ace4475c-748f-4519-8f32-a70468fb9ee5","Type":"ContainerStarted","Data":"280be09379be161d91b80e32f05d00398ee222982dd3245ded076fefb6987501"}
Jan 28 15:05:42 crc kubenswrapper[4981]: W0128 15:05:42.810382 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0684628c_961e_4847_8396_de662a9e62a3.slice/crio-ab4081e568917a6528df95843c0c394df38b0d91a424e24a4f17be4701c5b4c0 WatchSource:0}: Error finding container ab4081e568917a6528df95843c0c394df38b0d91a424e24a4f17be4701c5b4c0: Status 404 returned error can't find the container with id ab4081e568917a6528df95843c0c394df38b0d91a424e24a4f17be4701c5b4c0
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.822822 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rm54g" event={"ID":"8c8844ca-8b4a-4507-af56-255af25c0fdc","Type":"ContainerStarted","Data":"a745c3379716a0f99da2d5f36727b5a07bdd1c36bcb15995e44cf22c7f0f604c"}
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.838371 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ndxj2" event={"ID":"d919cc22-349d-44d4-9715-574a49338b02","Type":"ContainerStarted","Data":"c8c734a595658d0681c05baa0c3516c3bebca8dc39e388c8503298cc7d709215"}
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.854844 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j" event={"ID":"ab58fe84-53f7-4d26-9606-364d442d40d2","Type":"ContainerStarted","Data":"fb4a951142dd56ccdf2b31472e7e18c9434d7e9937a9074679d676f09cb1fb4b"}
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.855659 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j"
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.861968 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gchkj" event={"ID":"319c2ac2-dec9-4935-ae29-bc9b663d9820","Type":"ContainerStarted","Data":"de19c14e05523e05b7d5fc59124cae1a8ea6d1d16453b4581c543ed81e3d39db"}
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.865173 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qrcvg" event={"ID":"c63cf8f0-2f23-4132-8829-fea088becd6f","Type":"ContainerStarted","Data":"5093f4e6254991d11d4d959e08d4ce533c9cd3f498973ad7c94b949e1d9e5fe9"}
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.865245 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qrcvg" event={"ID":"c63cf8f0-2f23-4132-8829-fea088becd6f","Type":"ContainerStarted","Data":"09012ce728f5d90f3d0cca42f52e38784d565e693b51ec725f3024753fa431fd"}
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.876082 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269"
Jan 28 15:05:42 crc kubenswrapper[4981]: E0128 15:05:42.878066 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:43.37805068 +0000 UTC m=+154.830208931 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.885643 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" event={"ID":"c0008f36-c407-4f88-9da3-55a32f23bf4d","Type":"ContainerStarted","Data":"14db3061d60b528e8b8b57b0931b3f7ec301ebe67cb57e1840b435e7975f429b"}
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.888308 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-bnsn8" event={"ID":"d9879f20-7ec3-46f5-b58c-6f49e431d23f","Type":"ContainerStarted","Data":"c55ded72b329d8c40872f0287336c9489c3ba37ef7f1dfe52732bb75fc86e720"}
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.898854 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v9xj2" event={"ID":"ea96a27d-8cce-4f3a-b634-97c6c0693dfb","Type":"ContainerStarted","Data":"2741d3a13d830b68a3b92b7937d94146a009d29fb8b8c5324386f16bda8dab0e"}
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.903518 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" event={"ID":"9ef8120f-cd78-42bc-9838-3449fa4cdcd0","Type":"ContainerStarted","Data":"197f42aba23d685241e23e5b4d6b594345152dc443b942bd99c3c1b40f70abc4"}
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.905134 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-f495f" event={"ID":"c1b17582-e568-4cba-8137-d502636a43cb","Type":"ContainerStarted","Data":"5bda0b16e59e63a4023b808100e1047449973a499307c41082f59f1540206944"}
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.905162 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-f495f" event={"ID":"c1b17582-e568-4cba-8137-d502636a43cb","Type":"ContainerStarted","Data":"2329ce31c8f950af140de42dd8252dbb6ffbc51770747e241b162ea2f65b0658"}
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.907449 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x2wjc" event={"ID":"313fb5fa-63ee-4008-9e6c-94adc6fa6e67","Type":"ContainerStarted","Data":"e448c8457be172a7dc77515e40c02b7c32b48278dd88e3684f22b2b5fc7545b5"}
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.909203 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g" event={"ID":"d6ff8d77-eccc-4485-bca4-04baf87fb060","Type":"ContainerStarted","Data":"3b8d3fef8631ce67be03676f56b7a6128574d173cbf5d916a64206bb3a1f23d9"}
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.913204 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-6xthx" podStartSLOduration=128.913169905 podStartE2EDuration="2m8.913169905s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:42.913061242 +0000 UTC m=+154.365219483" watchObservedRunningTime="2026-01-28 15:05:42.913169905 +0000 UTC m=+154.365328146"
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.918446 4981 patch_prober.go:28] interesting pod/downloads-7954f5f757-xq5cv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body=
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.918507 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xq5cv" podUID="7a295e4c-ed2b-4d54-8b74-2901caa05143" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused"
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.919002 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mn9sc" event={"ID":"e263507d-26bb-4417-80fb-24a64dda98f5","Type":"ContainerStarted","Data":"dca49333628f687e5023f5790c4089ea1c836a9a15aceac402dde2308bd858e5"}
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.977322 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-8md5t"
Jan 28 15:05:42 crc kubenswrapper[4981]: I0128 15:05:42.979851 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:05:42 crc kubenswrapper[4981]: E0128 15:05:42.981291 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:43.48126063 +0000 UTC m=+154.933418871 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.011608 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-rv5bz" podStartSLOduration=129.011583526 podStartE2EDuration="2m9.011583526s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:42.980177713 +0000 UTC m=+154.432335954" watchObservedRunningTime="2026-01-28 15:05:43.011583526 +0000 UTC m=+154.463741767"
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.052579 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-xq5cv" podStartSLOduration=129.052555456 podStartE2EDuration="2m9.052555456s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:43.047368637 +0000 UTC m=+154.499526878" watchObservedRunningTime="2026-01-28 15:05:43.052555456 +0000 UTC m=+154.504713697"
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.073544 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wwrj7"]
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.083622 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269"
Jan 28 15:05:43 crc kubenswrapper[4981]: E0128 15:05:43.088019 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:43.587984318 +0000 UTC m=+155.040142559 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.088411 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-xp9h9"]
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.129111 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qdt5k"]
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.134958 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf"]
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.186313 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:05:43 crc kubenswrapper[4981]: E0128 15:05:43.186526 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:43.686483482 +0000 UTC m=+155.138641723 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.186621 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269"
Jan 28 15:05:43 crc kubenswrapper[4981]: E0128 15:05:43.187016 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:43.686999874 +0000 UTC m=+155.139158105 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.191981 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-74zjr"]
Jan 28 15:05:43 crc kubenswrapper[4981]: W0128 15:05:43.198794 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8596ade7_4f4d_4f58_acb8_400366812372.slice/crio-022706a5a7bbd84cba797adee7763d07d1ae6380efdd2bb0d2abe5131d1a4b9c WatchSource:0}: Error finding container 022706a5a7bbd84cba797adee7763d07d1ae6380efdd2bb0d2abe5131d1a4b9c: Status 404 returned error can't find the container with id 022706a5a7bbd84cba797adee7763d07d1ae6380efdd2bb0d2abe5131d1a4b9c
Jan 28 15:05:43 crc kubenswrapper[4981]: W0128 15:05:43.206451 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42bc389e_2653_4d2a_a220_6d8f81523991.slice/crio-1790b1f4f2d1f170d212d557a8f1fcf4c689f8bc8fcafae9f02c36ed9ea0cd2f WatchSource:0}: Error finding container 1790b1f4f2d1f170d212d557a8f1fcf4c689f8bc8fcafae9f02c36ed9ea0cd2f: Status 404 returned error can't find the container with id 1790b1f4f2d1f170d212d557a8f1fcf4c689f8bc8fcafae9f02c36ed9ea0cd2f
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.213829 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw"]
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.287975 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:05:43 crc kubenswrapper[4981]: E0128 15:05:43.288235 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:43.788202615 +0000 UTC m=+155.240360856 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.288317 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269"
Jan 28 15:05:43 crc kubenswrapper[4981]: E0128 15:05:43.288732 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:43.788716328 +0000 UTC m=+155.240874569 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.316727 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nskbs"]
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.334500 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4"]
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.358069 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-8md5t" podStartSLOduration=129.358047894 podStartE2EDuration="2m9.358047894s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:43.353893821 +0000 UTC m=+154.806052052" watchObservedRunningTime="2026-01-28 15:05:43.358047894 +0000 UTC m=+154.810206135"
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.390167 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7j587"]
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.390386 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:05:43 crc kubenswrapper[4981]: E0128 15:05:43.391118 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:43.891078467 +0000 UTC m=+155.343236718 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.394547 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269"
Jan 28 15:05:43 crc kubenswrapper[4981]: E0128 15:05:43.395121 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:43.895079247 +0000 UTC m=+155.347237488 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.395870 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn"]
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.416487 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k7sgz"]
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.417827 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-t8dqm" podStartSLOduration=129.417795422 podStartE2EDuration="2m9.417795422s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:43.389024026 +0000 UTC m=+154.841182267" watchObservedRunningTime="2026-01-28 15:05:43.417795422 +0000 UTC m=+154.869953663"
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.424336 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf"
Jan 28 15:05:43 crc kubenswrapper[4981]: W0128 15:05:43.448170 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2aa0a406_85d4_4158_a9b9_850224bfbbc5.slice/crio-0d24d3b60487c3800165429654c36a2aa619937496e2687c93c94f654611fe91 WatchSource:0}: Error finding container 0d24d3b60487c3800165429654c36a2aa619937496e2687c93c94f654611fe91: Status 404 returned error can't find the container with id 0d24d3b60487c3800165429654c36a2aa619937496e2687c93c94f654611fe91
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.491602 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-98q6g" podStartSLOduration=129.49157827 podStartE2EDuration="2m9.49157827s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:43.434666202 +0000 UTC m=+154.886824453" watchObservedRunningTime="2026-01-28 15:05:43.49157827 +0000 UTC m=+154.943736511"
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.497826 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:05:43 crc kubenswrapper[4981]: E0128 15:05:43.498201 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:43.998160714 +0000 UTC m=+155.450318955 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.523662 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-ndxj2" podStartSLOduration=5.523640298 podStartE2EDuration="5.523640298s" podCreationTimestamp="2026-01-28 15:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:43.521802753 +0000 UTC m=+154.973960994" watchObservedRunningTime="2026-01-28 15:05:43.523640298 +0000 UTC m=+154.975798539"
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.590618 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv" podStartSLOduration=128.590589926 podStartE2EDuration="2m8.590589926s" podCreationTimestamp="2026-01-28 15:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:43.549143874 +0000 UTC m=+155.001302115" watchObservedRunningTime="2026-01-28 15:05:43.590589926 +0000 UTC m=+155.042748167"
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.598866 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269"
Jan 28 15:05:43 crc kubenswrapper[4981]: E0128 15:05:43.599369 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:44.099346864 +0000 UTC m=+155.551505105 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.621949 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" podStartSLOduration=129.621931276 podStartE2EDuration="2m9.621931276s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:43.618244375 +0000 UTC m=+155.070402616" watchObservedRunningTime="2026-01-28 15:05:43.621931276 +0000 UTC m=+155.074089517"
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.643266 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-gchkj" podStartSLOduration=129.643245797 podStartE2EDuration="2m9.643245797s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:43.641649677 +0000 UTC m=+155.093807918" watchObservedRunningTime="2026-01-28 15:05:43.643245797 +0000 UTC m=+155.095404038"
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.679637 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-x2wjc" podStartSLOduration=129.679611683 podStartE2EDuration="2m9.679611683s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:43.677936041 +0000 UTC m=+155.130094302" watchObservedRunningTime="2026-01-28 15:05:43.679611683 +0000 UTC m=+155.131769924"
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.701548 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:05:43 crc kubenswrapper[4981]: E0128 15:05:43.701773 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:44.201737184 +0000 UTC m=+155.653895425 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.702015 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269"
Jan 28 15:05:43 crc kubenswrapper[4981]: E0128 15:05:43.702511 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:44.202501543 +0000 UTC m=+155.654659784 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.712936 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qrcvg" podStartSLOduration=128.712913572 podStartE2EDuration="2m8.712913572s" podCreationTimestamp="2026-01-28 15:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:43.708668407 +0000 UTC m=+155.160826648" watchObservedRunningTime="2026-01-28 15:05:43.712913572 +0000 UTC m=+155.165071803"
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.759544 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-v9xj2" podStartSLOduration=129.759522483 podStartE2EDuration="2m9.759522483s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:43.756492648 +0000 UTC m=+155.208650889" watchObservedRunningTime="2026-01-28 15:05:43.759522483 +0000 UTC m=+155.211680724"
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.795701 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-f495f" podStartSLOduration=128.795669823 podStartE2EDuration="2m8.795669823s" podCreationTimestamp="2026-01-28 15:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:43.787390387 +0000 UTC m=+155.239548628" watchObservedRunningTime="2026-01-28 15:05:43.795669823 +0000 UTC m=+155.247828064"
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.805232 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:05:43 crc kubenswrapper[4981]: E0128 15:05:43.805470 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:44.305441577 +0000 UTC m=+155.757599818 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.805601 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269"
Jan 28 15:05:43 crc kubenswrapper[4981]: E0128 15:05:43.805938 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:44.305929709 +0000 UTC m=+155.758087950 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.834722 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j" podStartSLOduration=129.834691255 podStartE2EDuration="2m9.834691255s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:43.834428539 +0000 UTC m=+155.286586790" watchObservedRunningTime="2026-01-28 15:05:43.834691255 +0000 UTC m=+155.286849496"
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.907094 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 15:05:43 crc kubenswrapper[4981]: E0128 15:05:43.907751 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:44.407729994 +0000 UTC m=+155.859888235 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.921081 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-74zjr" event={"ID":"42bc389e-2653-4d2a-a220-6d8f81523991","Type":"ContainerStarted","Data":"1790b1f4f2d1f170d212d557a8f1fcf4c689f8bc8fcafae9f02c36ed9ea0cd2f"}
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.922499 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" event={"ID":"9ef8120f-cd78-42bc-9838-3449fa4cdcd0","Type":"ContainerStarted","Data":"5adc2aa4ad5b96e4ae893d4c6bd5e566e0ef041fe5c8b29e9da250627ec33030"}
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.924494 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" event={"ID":"a2889542-ffb9-4af8-8f77-ccfd601dec88","Type":"ContainerStarted","Data":"83d1563b931d07ae0ad36956d659836797a3f12e687f90c502db233620991e02"}
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.925701 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-bnsn8" event={"ID":"d9879f20-7ec3-46f5-b58c-6f49e431d23f","Type":"ContainerStarted","Data":"7fa2da514a1a2a6b44d2e15a4ccf075fbea2199885cbec8f2c514dccbb0e2e49"}
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.927458 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wwrj7" event={"ID":"faadd380-e6cc-40da-9321-ea2c610c3580","Type":"ContainerStarted","Data":"232e6780bc9819e422daa5325eb797b3e41a632cb93c72976fb7a12a4ae1a73d"}
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.927561 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wwrj7" event={"ID":"faadd380-e6cc-40da-9321-ea2c610c3580","Type":"ContainerStarted","Data":"06f2c32999718c20f655f36373b127bf334d7a33db6bb1f573ba1d0c524d4b0b"}
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.929486 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7zgzs" event={"ID":"ace4475c-748f-4519-8f32-a70468fb9ee5","Type":"ContainerStarted","Data":"46166b07fcb5931c7347545bfb0c58cb88324de84cddce7f8e04ad73fa071d9d"}
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.931124 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nxh5j" event={"ID":"5523855c-574f-47f3-8e9a-8ddbde1a35f6","Type":"ContainerStarted","Data":"d22b393de15dce1f969939ebc38db338344a573aefa238aa85bafbddcfa5a30a"}
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.931256 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nxh5j" event={"ID":"5523855c-574f-47f3-8e9a-8ddbde1a35f6","Type":"ContainerStarted","Data":"c4b51f4c59e095621ea6bdf6888bc9cb7d2b13b18326ed323d042c1ff5f6e298"}
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.934890 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" event={"ID":"8596ade7-4f4d-4f58-acb8-400366812372","Type":"ContainerStarted","Data":"022706a5a7bbd84cba797adee7763d07d1ae6380efdd2bb0d2abe5131d1a4b9c"}
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.938121 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8wqg4" event={"ID":"eae035e4-8b4b-4ff2-8be5-a68d742677f0","Type":"ContainerStarted","Data":"e3c7c65ebcf648d73d7e46e60695efc8cc8ee8e195dd4293872298e9775094cf"}
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.938161 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8wqg4" event={"ID":"eae035e4-8b4b-4ff2-8be5-a68d742677f0","Type":"ContainerStarted","Data":"c6da6ed83616000ab89dfe478fb4a752e03b931b612bbd54478b5d8dfdfdcd92"}
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.941505 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" event={"ID":"395d9f95-321c-4af1-8ab5-681e84ccbae1","Type":"ContainerStarted","Data":"90ebb4e6224ae9bf703e8ff0a268865c8e7211de7f9bdb5f7d158952d5ad73e6"}
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.943985 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-tml6p" podStartSLOduration=129.943965797 podStartE2EDuration="2m9.943965797s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:43.942244784 +0000 UTC m=+155.394403025" watchObservedRunningTime="2026-01-28 15:05:43.943965797 +0000 UTC m=+155.396124038"
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.948588 4981 generic.go:334] "Generic (PLEG): container finished" podID="9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6" containerID="5401edfc0622c18692fbc3cdd1c9a40d29641422571831851a0f3d476aa41d7a" exitCode=0
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.948721 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bj272" event={"ID":"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6","Type":"ContainerDied","Data":"5401edfc0622c18692fbc3cdd1c9a40d29641422571831851a0f3d476aa41d7a"}
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.953634 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cl8rz" event={"ID":"08a3b621-305d-4655-bee4-78dc9766a1d1","Type":"ContainerStarted","Data":"7c3c0a29313e0602c1b083dbafe5599c49e91c343788342355f5a34f5e025e93"}
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.962439 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nc6fh" event={"ID":"0684628c-961e-4847-8396-de662a9e62a3","Type":"ContainerStarted","Data":"846d41528e815bb8c3c812d9c9f43b765c64a26059ec7de0b28f955615fd6e4e"}
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.962492 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nc6fh" event={"ID":"0684628c-961e-4847-8396-de662a9e62a3","Type":"ContainerStarted","Data":"ab4081e568917a6528df95843c0c394df38b0d91a424e24a4f17be4701c5b4c0"}
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.970604 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-bnsn8" podStartSLOduration=129.97058605 podStartE2EDuration="2m9.97058605s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:43.96939357 +0000 UTC m=+155.421551811" watchObservedRunningTime="2026-01-28 15:05:43.97058605 +0000 UTC m=+155.422744291"
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.976856 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rm54g" event={"ID":"8c8844ca-8b4a-4507-af56-255af25c0fdc","Type":"ContainerStarted","Data":"602c7c54b11539478e64c0e986b8741c7730eab00b380e64d4d75b55b728e23d"}
Jan 28 15:05:43 crc kubenswrapper[4981]: I0128 15:05:43.984475 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" event={"ID":"161e9b26-1b52-43e8-90a1-5dae906eec38","Type":"ContainerStarted","Data":"b841a263ac4360e209e5b03c55a8ad3ea90a3d222042b9e0644db89324f355e0"}
Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.002741 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-7zgzs" podStartSLOduration=130.00271973 podStartE2EDuration="2m10.00271973s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:44.001556681 +0000 UTC m=+155.453714922" watchObservedRunningTime="2026-01-28 15:05:44.00271973 +0000 UTC m=+155.454877971"
Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.009894 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269"
Jan 28 15:05:44 crc kubenswrapper[4981]: E0128 15:05:44.014207 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:44.514174885 +0000 UTC m=+155.966333126 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.037842 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mn9sc" event={"ID":"e263507d-26bb-4417-80fb-24a64dda98f5","Type":"ContainerStarted","Data":"4091d67808f856ce27c04ba5d3cdff219e4b8688dc3f798e2bf312d7bb960dc7"} Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.041538 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" event={"ID":"c0008f36-c407-4f88-9da3-55a32f23bf4d","Type":"ContainerStarted","Data":"4dfe031091dda0dc6a7f00faea9dc75cd3e0a8ab2039932bfd5b02e580503193"} Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.042587 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.053696 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7j587" event={"ID":"0c082a26-a425-4d2d-b547-f74498853a6b","Type":"ContainerStarted","Data":"e06602e1ed5c0335a9655b47febb39113a59f5178543228ad2ddc01f14d45773"} Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.083571 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k7sgz" event={"ID":"2aa0a406-85d4-4158-a9b9-850224bfbbc5","Type":"ContainerStarted","Data":"0d24d3b60487c3800165429654c36a2aa619937496e2687c93c94f654611fe91"} Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.091315 4981 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lc28t container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.091381 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" podUID="c0008f36-c407-4f88-9da3-55a32f23bf4d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.101556 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw" event={"ID":"763b096e-8072-4b12-9a49-1081568af0db","Type":"ContainerStarted","Data":"4db82ffcb4fb44c3c1718badb78635fb4ab74829f7db65d7dd29689251d9d296"} Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.101615 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw" event={"ID":"763b096e-8072-4b12-9a49-1081568af0db","Type":"ContainerStarted","Data":"546f4ab0e204f677ac3262c92ac684d9351f53e1813b6022b90e910fdfc38fba"} Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.110359 4981 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:44 crc kubenswrapper[4981]: E0128 15:05:44.111235 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:44.611215622 +0000 UTC m=+156.063373863 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.112138 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-8wqg4" podStartSLOduration=130.112114954 podStartE2EDuration="2m10.112114954s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:44.066882848 +0000 UTC m=+155.519041109" watchObservedRunningTime="2026-01-28 15:05:44.112114954 +0000 UTC m=+155.564273195" Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.120089 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4" event={"ID":"24678df2-62ec-4c8d-8f94-91ae16c8fa04","Type":"ContainerStarted","Data":"6e14f9f6aa5428c8060069a461bef266e5a41903e1d8c09baa694632c9de9828"} Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.120145 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4" event={"ID":"24678df2-62ec-4c8d-8f94-91ae16c8fa04","Type":"ContainerStarted","Data":"27be10c22319174937124b95cc959f061a6607b88eaa0fc6cd92446b49b70b1e"} Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.124379 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xp9h9" event={"ID":"bd2d3121-daf0-45ca-b5bd-f78fb878d6e3","Type":"ContainerStarted","Data":"643e98562a00c1ad1ecb8b3a1b7ada6dc3851dcd9edf4a67816b9b79a33752a6"} Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.124409 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-xp9h9" event={"ID":"bd2d3121-daf0-45ca-b5bd-f78fb878d6e3","Type":"ContainerStarted","Data":"54617cb7e0a3b3f8c87bda58146959da5f89a6c7597b61a1ec798e91eb67c78b"} Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.147753 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf" event={"ID":"ed4c6fed-3e17-40a9-b844-adc144028848","Type":"ContainerStarted","Data":"f997bcef97eadacc863d035087f4cbcb25c94fc21b494676a224659f1514b8a5"} Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 
15:05:44.147800 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf" event={"ID":"ed4c6fed-3e17-40a9-b844-adc144028848","Type":"ContainerStarted","Data":"62952e8ef228483191c5ebc3a7c70d51fb5e1b25cadbfa158f091104441620e3"} Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.150025 4981 patch_prober.go:28] interesting pod/downloads-7954f5f757-xq5cv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.150080 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xq5cv" podUID="7a295e4c-ed2b-4d54-8b74-2901caa05143" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.151083 4981 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-rkdvv container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.151136 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv" podUID="6efa0abf-efcc-4549-8c82-970b3c150dea" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.164440 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-nc6fh" podStartSLOduration=130.164408597 podStartE2EDuration="2m10.164408597s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:44.155931396 +0000 UTC m=+155.608089637" watchObservedRunningTime="2026-01-28 15:05:44.164408597 +0000 UTC m=+155.616566838" Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.194079 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rm54g" podStartSLOduration=130.194044205 podStartE2EDuration="2m10.194044205s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:44.193970393 +0000 UTC m=+155.646128634" watchObservedRunningTime="2026-01-28 15:05:44.194044205 +0000 UTC m=+155.646202446" Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.213550 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.215116 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: 
\"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.216220 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.216289 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 28 15:05:44 crc kubenswrapper[4981]: E0128 15:05:44.221730 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:44.716032643 +0000 UTC m=+156.168190874 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.246042 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" podStartSLOduration=129.246009749 podStartE2EDuration="2m9.246009749s" podCreationTimestamp="2026-01-28 15:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:44.238502112 +0000 UTC m=+155.690660353" watchObservedRunningTime="2026-01-28 15:05:44.246009749 +0000 UTC m=+155.698167990" Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.296633 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mn9sc" podStartSLOduration=130.296611899 podStartE2EDuration="2m10.296611899s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:44.281876532 +0000 UTC m=+155.734034793" watchObservedRunningTime="2026-01-28 15:05:44.296611899 +0000 UTC m=+155.748770130" Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.327606 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:44 crc kubenswrapper[4981]: E0128 15:05:44.329130 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 15:05:44.829103248 +0000 UTC m=+156.281261489 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.401330 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" podStartSLOduration=130.401304107 podStartE2EDuration="2m10.401304107s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:44.360632804 +0000 UTC m=+155.812791045" watchObservedRunningTime="2026-01-28 15:05:44.401304107 +0000 UTC m=+155.853462348" Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.402156 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf" podStartSLOduration=130.402151388 podStartE2EDuration="2m10.402151388s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:44.4018197 +0000 UTC m=+155.853977931" watchObservedRunningTime="2026-01-28 15:05:44.402151388 +0000 UTC m=+155.854309629" Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.452406 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:44 crc kubenswrapper[4981]: E0128 15:05:44.452850 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:44.95283448 +0000 UTC m=+156.404992721 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.516544 4981 csr.go:261] certificate signing request csr-2djqv is approved, waiting to be issued Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.534747 4981 csr.go:257] certificate signing request csr-2djqv is issued Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.553663 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:44 crc kubenswrapper[4981]: E0128 15:05:44.553928 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:45.053886797 +0000 UTC m=+156.506045038 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.554262 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:44 crc kubenswrapper[4981]: E0128 15:05:44.554714 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:45.054697777 +0000 UTC m=+156.506856018 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.586672 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.655454 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:44 crc kubenswrapper[4981]: E0128 15:05:44.655855 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:45.155837396 +0000 UTC m=+156.607995637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.677217 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-xp9h9" podStartSLOduration=6.677178467 podStartE2EDuration="6.677178467s" podCreationTimestamp="2026-01-28 15:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:44.445929598 +0000 UTC m=+155.898087839" watchObservedRunningTime="2026-01-28 15:05:44.677178467 +0000 UTC m=+156.129336708" Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.757446 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:44 crc kubenswrapper[4981]: E0128 15:05:44.758104 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:45.258081152 +0000 UTC m=+156.710239403 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.812827 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.812899 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.815263 4981 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-zn7fg container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.15:8443/livez\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.815326 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" podUID="161e9b26-1b52-43e8-90a1-5dae906eec38" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.15:8443/livez\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.861508 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:44 crc kubenswrapper[4981]: E0128 15:05:44.861915 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:45.361882126 +0000 UTC m=+156.814040367 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:44 crc kubenswrapper[4981]: I0128 15:05:44.963013 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:44 crc kubenswrapper[4981]: E0128 15:05:44.963361 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 15:05:45.463348273 +0000 UTC m=+156.915506514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.064389 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:45 crc kubenswrapper[4981]: E0128 15:05:45.064767 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:45.564745429 +0000 UTC m=+157.016903660 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.150752 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7j587" event={"ID":"0c082a26-a425-4d2d-b547-f74498853a6b","Type":"ContainerStarted","Data":"f5b0d8203115c942f336dbf47f728c060cb66451ec23a01471b4b63de642fca8"} Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.155526 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-74zjr" event={"ID":"42bc389e-2653-4d2a-a220-6d8f81523991","Type":"ContainerStarted","Data":"1d5149d2da7d446dc043df48c6e8282f337060523cb14f646077b06c9ac3f475"} Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.155588 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-74zjr" event={"ID":"42bc389e-2653-4d2a-a220-6d8f81523991","Type":"ContainerStarted","Data":"02653f1d609020fcd01d60810d9822c5cf12346a5a68d6f4571588672460100d"} Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.156336 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-74zjr" Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.158234 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k7sgz" event={"ID":"2aa0a406-85d4-4158-a9b9-850224bfbbc5","Type":"ContainerStarted","Data":"5f34a8fd2f34f842914dc28238a692a37ea00db98ae0c7ec4007632cbff80c93"} Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.158271 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k7sgz" 
event={"ID":"2aa0a406-85d4-4158-a9b9-850224bfbbc5","Type":"ContainerStarted","Data":"9e77481b8f7ad85cc5913f64ea3e781f62f3dfcbc4ccef21debc75224d6c9723"} Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.158740 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k7sgz" Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.161414 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" event={"ID":"395d9f95-321c-4af1-8ab5-681e84ccbae1","Type":"ContainerStarted","Data":"dcf5c32bdf3f62b2ab7237a821dd2ea16084b85f14172774ed007431a807a86c"} Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.162048 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.164870 4981 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-9nchn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body= Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.164954 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" podUID="395d9f95-321c-4af1-8ab5-681e84ccbae1" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.165680 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:45 crc kubenswrapper[4981]: E0128 15:05:45.166121 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:45.666099493 +0000 UTC m=+157.118257734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.171534 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bj272" event={"ID":"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6","Type":"ContainerStarted","Data":"1065647d1ce4eb3a7e76bb7aff4a3ceb8ca35e695c45e6849843166486480da1"} Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.174787 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" event={"ID":"a2889542-ffb9-4af8-8f77-ccfd601dec88","Type":"ContainerStarted","Data":"7833d7e5892341d81930b3592d51102624eb31f053a148cbdc058abc4de2cb5e"} Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.175874 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.178168 4981 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nskbs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.178230 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" podUID="a2889542-ffb9-4af8-8f77-ccfd601dec88" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.178677 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7j587" podStartSLOduration=130.178649676 podStartE2EDuration="2m10.178649676s" podCreationTimestamp="2026-01-28 15:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:45.17439337 +0000 UTC m=+156.626551601" watchObservedRunningTime="2026-01-28 15:05:45.178649676 +0000 UTC m=+156.630807917" Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.188675 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cl8rz" event={"ID":"08a3b621-305d-4655-bee4-78dc9766a1d1","Type":"ContainerStarted","Data":"c18ae1a18449c54592e17ae96913e09b0059aee09927996a8cc86c30b30c1921"} Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.194089 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-nxh5j" event={"ID":"5523855c-574f-47f3-8e9a-8ddbde1a35f6","Type":"ContainerStarted","Data":"53ae56858cda7079587bcc1e3712e7c0d653c281ab1c6c316c0ee9ac9739b436"} Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.202672 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw" 
event={"ID":"763b096e-8072-4b12-9a49-1081568af0db","Type":"ContainerStarted","Data":"cd50ab8eff6451c48f8aed7f64810b53c931b94b7a034abdfe37be3afb51dca4"} Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.208968 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4" event={"ID":"24678df2-62ec-4c8d-8f94-91ae16c8fa04","Type":"ContainerStarted","Data":"93053da17e885bbbe8728b1387f127f2352f9d9e88afe42719d0be04605f05b0"} Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.210830 4981 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-lc28t container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.210881 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" podUID="c0008f36-c407-4f88-9da3-55a32f23bf4d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.216064 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-74zjr" podStartSLOduration=7.216043147 podStartE2EDuration="7.216043147s" podCreationTimestamp="2026-01-28 15:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:45.212455208 +0000 UTC m=+156.664613449" watchObservedRunningTime="2026-01-28 15:05:45.216043147 +0000 UTC m=+156.668201388" Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.218290 4981 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-zkh5j container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.218343 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j" podUID="ab58fe84-53f7-4d26-9606-364d442d40d2" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.222668 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:05:45 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Jan 28 15:05:45 crc kubenswrapper[4981]: [+]process-running ok Jan 28 15:05:45 crc kubenswrapper[4981]: healthz check failed Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.222729 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.241212 4981 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k7sgz" podStartSLOduration=130.241178463 podStartE2EDuration="2m10.241178463s" podCreationTimestamp="2026-01-28 15:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:45.239159653 +0000 UTC m=+156.691317894" watchObservedRunningTime="2026-01-28 15:05:45.241178463 +0000 UTC m=+156.693336694" Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.273668 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:45 crc kubenswrapper[4981]: E0128 15:05:45.280952 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:45.780920713 +0000 UTC m=+157.233078954 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.281934 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" podStartSLOduration=130.281893757 podStartE2EDuration="2m10.281893757s" podCreationTimestamp="2026-01-28 15:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:45.262376021 +0000 UTC m=+156.714534262" watchObservedRunningTime="2026-01-28 15:05:45.281893757 +0000 UTC m=+156.734052018" Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.318552 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6cbf4" podStartSLOduration=131.318533749 podStartE2EDuration="2m11.318533749s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:45.315382781 +0000 UTC m=+156.767541022" watchObservedRunningTime="2026-01-28 15:05:45.318533749 +0000 UTC m=+156.770691990" Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.354692 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" podStartSLOduration=130.354666759 podStartE2EDuration="2m10.354666759s" podCreationTimestamp="2026-01-28 15:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:45.353971262 +0000 UTC m=+156.806129493" watchObservedRunningTime="2026-01-28 15:05:45.354666759 +0000 UTC m=+156.806825000" Jan 28 15:05:45 
crc kubenswrapper[4981]: I0128 15:05:45.377316 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:45 crc kubenswrapper[4981]: E0128 15:05:45.384767 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:45.884747989 +0000 UTC m=+157.336906420 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.410965 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-nxh5j" podStartSLOduration=131.410947921 podStartE2EDuration="2m11.410947921s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:45.383916288 +0000 UTC m=+156.836074529" watchObservedRunningTime="2026-01-28 15:05:45.410947921 +0000 UTC m=+156.863106162" Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.411514 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-ql4rw" podStartSLOduration=131.411511055 podStartE2EDuration="2m11.411511055s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:45.410157591 +0000 UTC m=+156.862315832" watchObservedRunningTime="2026-01-28 15:05:45.411511055 +0000 UTC m=+156.863669296" Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.446942 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-cl8rz" podStartSLOduration=131.446923697 podStartE2EDuration="2m11.446923697s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:45.446418934 +0000 UTC m=+156.898577175" watchObservedRunningTime="2026-01-28 15:05:45.446923697 +0000 UTC m=+156.899081938" Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.479962 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:45 crc kubenswrapper[4981]: E0128 15:05:45.480434 4981 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:45.980411711 +0000 UTC m=+157.432569952 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.483609 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wwrj7" podStartSLOduration=131.48358052 podStartE2EDuration="2m11.48358052s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:45.481129439 +0000 UTC m=+156.933287680" watchObservedRunningTime="2026-01-28 15:05:45.48358052 +0000 UTC m=+156.935738751" Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.536319 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-28 15:00:44 +0000 UTC, rotation deadline is 2026-10-18 11:08:38.160508919 +0000 UTC Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.536888 4981 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6308h2m52.623625622s for next certificate rotation Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.581430 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:45 crc kubenswrapper[4981]: E0128 15:05:45.581884 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:46.081861518 +0000 UTC m=+157.534019759 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.683349 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:45 crc kubenswrapper[4981]: E0128 15:05:45.683679 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:46.183642722 +0000 UTC m=+157.635800963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.683791 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:45 crc kubenswrapper[4981]: E0128 15:05:45.684226 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:46.184214817 +0000 UTC m=+157.636373058 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.785319 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:45 crc kubenswrapper[4981]: E0128 15:05:45.785548 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:46.285500929 +0000 UTC m=+157.737659170 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.785740 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:45 crc kubenswrapper[4981]: E0128 15:05:45.786170 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:46.286159146 +0000 UTC m=+157.738317577 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.886544 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:45 crc kubenswrapper[4981]: E0128 15:05:45.886801 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:46.386757871 +0000 UTC m=+157.838916122 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.887051 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:45 crc kubenswrapper[4981]: E0128 15:05:45.887639 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:46.387614072 +0000 UTC m=+157.839772313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.988460 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:45 crc kubenswrapper[4981]: E0128 15:05:45.988706 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:46.488655919 +0000 UTC m=+157.940814160 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:45 crc kubenswrapper[4981]: I0128 15:05:45.989068 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:45 crc kubenswrapper[4981]: E0128 15:05:45.989419 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:46.489405658 +0000 UTC m=+157.941563899 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.090226 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:46 crc kubenswrapper[4981]: E0128 15:05:46.090474 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:46.590432303 +0000 UTC m=+158.042590604 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.090780 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:46 crc kubenswrapper[4981]: E0128 15:05:46.091132 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:46.59111547 +0000 UTC m=+158.043273711 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.192153 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:46 crc kubenswrapper[4981]: E0128 15:05:46.192403 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:46.692360362 +0000 UTC m=+158.144518613 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.192541 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:46 crc kubenswrapper[4981]: E0128 15:05:46.193005 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:46.692992758 +0000 UTC m=+158.145151169 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.217749 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:05:46 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Jan 28 15:05:46 crc kubenswrapper[4981]: [+]process-running ok Jan 28 15:05:46 crc kubenswrapper[4981]: healthz check failed Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.217818 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.220292 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-bj272" event={"ID":"9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6","Type":"ContainerStarted","Data":"0e610c60baf52186158be118a44a22bafa3960716eedbdbee15c3c9c66365cb4"} Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.223208 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" event={"ID":"8596ade7-4f4d-4f58-acb8-400366812372","Type":"ContainerStarted","Data":"3e719a18c29bff32a7a9753307dbdf258407a953dad1722ed16a75e67cccf7f3"} Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.224179 4981 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nskbs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.224335 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" podUID="a2889542-ffb9-4af8-8f77-ccfd601dec88" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.226497 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7j587" Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.241989 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.285861 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-7j587" Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.293484 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:46 crc kubenswrapper[4981]: E0128 15:05:46.296832 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:46.796803673 +0000 UTC m=+158.248961914 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.396270 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:46 crc kubenswrapper[4981]: E0128 15:05:46.396684 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:46.89666925 +0000 UTC m=+158.348827481 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.419519 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-bj272" podStartSLOduration=132.419496699 podStartE2EDuration="2m12.419496699s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:46.337027005 +0000 UTC m=+157.789185276" watchObservedRunningTime="2026-01-28 15:05:46.419496699 +0000 UTC m=+157.871654940" Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.497881 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:46 crc kubenswrapper[4981]: E0128 15:05:46.498143 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:46.998102056 +0000 UTC m=+158.450260297 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.498547 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:46 crc kubenswrapper[4981]: E0128 15:05:46.498981 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:46.998970148 +0000 UTC m=+158.451128389 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.555541 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-zkh5j" Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.599988 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:46 crc kubenswrapper[4981]: E0128 15:05:46.600529 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:47.100496167 +0000 UTC m=+158.552654408 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.702309 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:46 crc kubenswrapper[4981]: E0128 15:05:46.703574 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:47.203547823 +0000 UTC m=+158.655706064 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.803459 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:46 crc kubenswrapper[4981]: E0128 15:05:46.803802 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:47.303779199 +0000 UTC m=+158.755937430 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:46 crc kubenswrapper[4981]: I0128 15:05:46.905599 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:46 crc kubenswrapper[4981]: E0128 15:05:46.906124 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:47.406098338 +0000 UTC m=+158.858256579 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.007153 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:47 crc kubenswrapper[4981]: E0128 15:05:47.007742 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:47.507708598 +0000 UTC m=+158.959866839 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.087763 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-9nchn" Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.109333 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:47 crc kubenswrapper[4981]: E0128 15:05:47.109842 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:47.609818521 +0000 UTC m=+159.061976762 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.210473 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:47 crc kubenswrapper[4981]: E0128 15:05:47.210724 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:47.710683043 +0000 UTC m=+159.162841294 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.211470 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:47 crc kubenswrapper[4981]: E0128 15:05:47.212135 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:47.712109239 +0000 UTC m=+159.164267480 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.217270 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:05:47 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Jan 28 15:05:47 crc kubenswrapper[4981]: [+]process-running ok Jan 28 15:05:47 crc kubenswrapper[4981]: healthz check failed Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.217327 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.231458 4981 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-nskbs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.231676 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" podUID="a2889542-ffb9-4af8-8f77-ccfd601dec88" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.312887 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:47 crc kubenswrapper[4981]: E0128 15:05:47.313106 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:47.813072333 +0000 UTC m=+159.265230574 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.314992 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:47 crc kubenswrapper[4981]: E0128 15:05:47.315425 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:47.815414452 +0000 UTC m=+159.267572693 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.416317 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:47 crc kubenswrapper[4981]: E0128 15:05:47.416502 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:47.916473189 +0000 UTC m=+159.368631430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.416784 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:47 crc kubenswrapper[4981]: E0128 15:05:47.417157 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:47.917146485 +0000 UTC m=+159.369304726 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.518522 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:47 crc kubenswrapper[4981]: E0128 15:05:47.518898 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:48.018877129 +0000 UTC m=+159.471035370 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.620652 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:47 crc kubenswrapper[4981]: E0128 15:05:47.621072 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:48.121054774 +0000 UTC m=+159.573213015 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.721924 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:47 crc kubenswrapper[4981]: E0128 15:05:47.722326 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:48.222305215 +0000 UTC m=+159.674463456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.722536 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9gspg"] Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.724093 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9gspg" Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.727125 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.758103 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9gspg"] Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.823178 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46mxf\" (UniqueName: \"kubernetes.io/projected/06d03aa9-a3ff-46c7-bafd-4666c5adf6c1-kube-api-access-46mxf\") pod \"community-operators-9gspg\" (UID: \"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1\") " pod="openshift-marketplace/community-operators-9gspg" Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.823265 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.823321 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d03aa9-a3ff-46c7-bafd-4666c5adf6c1-utilities\") pod \"community-operators-9gspg\" (UID: \"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1\") " pod="openshift-marketplace/community-operators-9gspg" Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.823344 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d03aa9-a3ff-46c7-bafd-4666c5adf6c1-catalog-content\") pod \"community-operators-9gspg\" (UID: \"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1\") " pod="openshift-marketplace/community-operators-9gspg" Jan 28 15:05:47 crc kubenswrapper[4981]: E0128 15:05:47.823789 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:48.323775552 +0000 UTC m=+159.775933793 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.924143 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:47 crc kubenswrapper[4981]: E0128 15:05:47.924458 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:48.424424359 +0000 UTC m=+159.876582600 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.924712 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46mxf\" (UniqueName: \"kubernetes.io/projected/06d03aa9-a3ff-46c7-bafd-4666c5adf6c1-kube-api-access-46mxf\") pod \"community-operators-9gspg\" (UID: \"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1\") " pod="openshift-marketplace/community-operators-9gspg" Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.924762 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.924804 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d03aa9-a3ff-46c7-bafd-4666c5adf6c1-utilities\") pod \"community-operators-9gspg\" (UID: \"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1\") " pod="openshift-marketplace/community-operators-9gspg" Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.924819 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d03aa9-a3ff-46c7-bafd-4666c5adf6c1-catalog-content\") pod \"community-operators-9gspg\" (UID: \"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1\") " pod="openshift-marketplace/community-operators-9gspg" Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.925297 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/06d03aa9-a3ff-46c7-bafd-4666c5adf6c1-catalog-content\") pod \"community-operators-9gspg\" (UID: \"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1\") " pod="openshift-marketplace/community-operators-9gspg" Jan 28 15:05:47 crc kubenswrapper[4981]: E0128 15:05:47.925976 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:48.425964108 +0000 UTC m=+159.878122339 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.926391 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d03aa9-a3ff-46c7-bafd-4666c5adf6c1-utilities\") pod \"community-operators-9gspg\" (UID: \"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1\") " pod="openshift-marketplace/community-operators-9gspg" Jan 28 15:05:47 crc kubenswrapper[4981]: I0128 15:05:47.982299 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46mxf\" (UniqueName: \"kubernetes.io/projected/06d03aa9-a3ff-46c7-bafd-4666c5adf6c1-kube-api-access-46mxf\") pod \"community-operators-9gspg\" (UID: \"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1\") " pod="openshift-marketplace/community-operators-9gspg" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.026568 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:48 crc kubenswrapper[4981]: E0128 15:05:48.026888 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:48.526869181 +0000 UTC m=+159.979027422 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.040522 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9gspg" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.092936 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2nm89"] Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.093916 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2nm89" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.105360 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2nm89"] Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.128136 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:48 crc kubenswrapper[4981]: E0128 15:05:48.128494 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:48.628479661 +0000 UTC m=+160.080637902 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.216916 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:05:48 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Jan 28 15:05:48 crc kubenswrapper[4981]: [+]process-running ok Jan 28 15:05:48 crc kubenswrapper[4981]: healthz check failed Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.217369 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.218784 4981 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.229343 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.229636 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4be2641-d6f3-4f86-ac61-e53d94db16c4-utilities\") pod \"community-operators-2nm89\" (UID: \"a4be2641-d6f3-4f86-ac61-e53d94db16c4\") " pod="openshift-marketplace/community-operators-2nm89" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.229677 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jdqdn\" (UniqueName: \"kubernetes.io/projected/a4be2641-d6f3-4f86-ac61-e53d94db16c4-kube-api-access-jdqdn\") pod \"community-operators-2nm89\" (UID: \"a4be2641-d6f3-4f86-ac61-e53d94db16c4\") " pod="openshift-marketplace/community-operators-2nm89" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.229721 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4be2641-d6f3-4f86-ac61-e53d94db16c4-catalog-content\") pod \"community-operators-2nm89\" (UID: \"a4be2641-d6f3-4f86-ac61-e53d94db16c4\") " pod="openshift-marketplace/community-operators-2nm89" Jan 28 15:05:48 crc kubenswrapper[4981]: E0128 15:05:48.229888 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:48.729871576 +0000 UTC m=+160.182029817 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.295676 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rnbwh"] Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.300875 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" event={"ID":"8596ade7-4f4d-4f58-acb8-400366812372","Type":"ContainerStarted","Data":"833ce91e9699a1cc25846e139bd800ed6d339e9d25ce62fa69e48a80847d7c8a"} Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.300924 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" event={"ID":"8596ade7-4f4d-4f58-acb8-400366812372","Type":"ContainerStarted","Data":"e8dc9159b9c1ed8a1164d30ce676279edbf26aef1086c432910f2b5d1fd128d3"} Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.301020 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rnbwh" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.309771 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.323384 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rnbwh"] Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.331367 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4be2641-d6f3-4f86-ac61-e53d94db16c4-catalog-content\") pod \"community-operators-2nm89\" (UID: \"a4be2641-d6f3-4f86-ac61-e53d94db16c4\") " pod="openshift-marketplace/community-operators-2nm89" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.331434 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.331472 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4be2641-d6f3-4f86-ac61-e53d94db16c4-utilities\") pod \"community-operators-2nm89\" (UID: \"a4be2641-d6f3-4f86-ac61-e53d94db16c4\") " pod="openshift-marketplace/community-operators-2nm89" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.331506 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdqdn\" (UniqueName: \"kubernetes.io/projected/a4be2641-d6f3-4f86-ac61-e53d94db16c4-kube-api-access-jdqdn\") pod \"community-operators-2nm89\" (UID: \"a4be2641-d6f3-4f86-ac61-e53d94db16c4\") " pod="openshift-marketplace/community-operators-2nm89" Jan 28 15:05:48 crc kubenswrapper[4981]: E0128 15:05:48.332074 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:48.832055341 +0000 UTC m=+160.284213582 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.333647 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4be2641-d6f3-4f86-ac61-e53d94db16c4-utilities\") pod \"community-operators-2nm89\" (UID: \"a4be2641-d6f3-4f86-ac61-e53d94db16c4\") " pod="openshift-marketplace/community-operators-2nm89" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.336606 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4be2641-d6f3-4f86-ac61-e53d94db16c4-catalog-content\") pod \"community-operators-2nm89\" (UID: \"a4be2641-d6f3-4f86-ac61-e53d94db16c4\") " pod="openshift-marketplace/community-operators-2nm89" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.375151 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdqdn\" (UniqueName: \"kubernetes.io/projected/a4be2641-d6f3-4f86-ac61-e53d94db16c4-kube-api-access-jdqdn\") pod \"community-operators-2nm89\" (UID: \"a4be2641-d6f3-4f86-ac61-e53d94db16c4\") " pod="openshift-marketplace/community-operators-2nm89" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.423921 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2nm89" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.434917 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.435417 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv2kc\" (UniqueName: \"kubernetes.io/projected/10fb71f7-ebd9-4ce7-91e8-cabe64948d68-kube-api-access-bv2kc\") pod \"certified-operators-rnbwh\" (UID: \"10fb71f7-ebd9-4ce7-91e8-cabe64948d68\") " pod="openshift-marketplace/certified-operators-rnbwh" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.435495 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fb71f7-ebd9-4ce7-91e8-cabe64948d68-utilities\") pod \"certified-operators-rnbwh\" (UID: \"10fb71f7-ebd9-4ce7-91e8-cabe64948d68\") " pod="openshift-marketplace/certified-operators-rnbwh" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.435561 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fb71f7-ebd9-4ce7-91e8-cabe64948d68-catalog-content\") pod \"certified-operators-rnbwh\" (UID: \"10fb71f7-ebd9-4ce7-91e8-cabe64948d68\") " pod="openshift-marketplace/certified-operators-rnbwh" Jan 28 15:05:48 crc kubenswrapper[4981]: E0128 15:05:48.435690 4981 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:48.935670501 +0000 UTC m=+160.387828742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.498385 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-t2mhx"] Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.499465 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t2mhx" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.530906 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t2mhx"] Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.537017 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.537068 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv2kc\" (UniqueName: \"kubernetes.io/projected/10fb71f7-ebd9-4ce7-91e8-cabe64948d68-kube-api-access-bv2kc\") pod \"certified-operators-rnbwh\" (UID: \"10fb71f7-ebd9-4ce7-91e8-cabe64948d68\") " pod="openshift-marketplace/certified-operators-rnbwh" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.537108 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fb71f7-ebd9-4ce7-91e8-cabe64948d68-utilities\") pod \"certified-operators-rnbwh\" (UID: \"10fb71f7-ebd9-4ce7-91e8-cabe64948d68\") " pod="openshift-marketplace/certified-operators-rnbwh" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.537156 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fb71f7-ebd9-4ce7-91e8-cabe64948d68-catalog-content\") pod \"certified-operators-rnbwh\" (UID: \"10fb71f7-ebd9-4ce7-91e8-cabe64948d68\") " pod="openshift-marketplace/certified-operators-rnbwh" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.537615 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fb71f7-ebd9-4ce7-91e8-cabe64948d68-catalog-content\") pod \"certified-operators-rnbwh\" (UID: \"10fb71f7-ebd9-4ce7-91e8-cabe64948d68\") " pod="openshift-marketplace/certified-operators-rnbwh" Jan 28 15:05:48 crc kubenswrapper[4981]: E0128 15:05:48.537910 4981 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:49.037896497 +0000 UTC m=+160.490054738 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.538628 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fb71f7-ebd9-4ce7-91e8-cabe64948d68-utilities\") pod \"certified-operators-rnbwh\" (UID: \"10fb71f7-ebd9-4ce7-91e8-cabe64948d68\") " pod="openshift-marketplace/certified-operators-rnbwh" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.578583 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv2kc\" (UniqueName: \"kubernetes.io/projected/10fb71f7-ebd9-4ce7-91e8-cabe64948d68-kube-api-access-bv2kc\") pod \"certified-operators-rnbwh\" (UID: \"10fb71f7-ebd9-4ce7-91e8-cabe64948d68\") " pod="openshift-marketplace/certified-operators-rnbwh" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.603330 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9gspg"] Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.638113 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.638396 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4103aaf7-a794-43ec-be3d-3c72aff08400-catalog-content\") pod \"certified-operators-t2mhx\" (UID: \"4103aaf7-a794-43ec-be3d-3c72aff08400\") " pod="openshift-marketplace/certified-operators-t2mhx" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.638438 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4103aaf7-a794-43ec-be3d-3c72aff08400-utilities\") pod \"certified-operators-t2mhx\" (UID: \"4103aaf7-a794-43ec-be3d-3c72aff08400\") " pod="openshift-marketplace/certified-operators-t2mhx" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.638510 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dk7j\" (UniqueName: \"kubernetes.io/projected/4103aaf7-a794-43ec-be3d-3c72aff08400-kube-api-access-7dk7j\") pod \"certified-operators-t2mhx\" (UID: \"4103aaf7-a794-43ec-be3d-3c72aff08400\") " pod="openshift-marketplace/certified-operators-t2mhx" Jan 28 15:05:48 crc kubenswrapper[4981]: E0128 15:05:48.638620 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:49.138598105 +0000 UTC m=+160.590756346 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.644011 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rnbwh" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.695444 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.696207 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.704649 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.704904 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.715688 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.740411 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4103aaf7-a794-43ec-be3d-3c72aff08400-utilities\") pod \"certified-operators-t2mhx\" (UID: \"4103aaf7-a794-43ec-be3d-3c72aff08400\") " pod="openshift-marketplace/certified-operators-t2mhx" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.740839 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4103aaf7-a794-43ec-be3d-3c72aff08400-utilities\") pod \"certified-operators-t2mhx\" (UID: \"4103aaf7-a794-43ec-be3d-3c72aff08400\") " pod="openshift-marketplace/certified-operators-t2mhx" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.741017 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.741053 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dk7j\" (UniqueName: \"kubernetes.io/projected/4103aaf7-a794-43ec-be3d-3c72aff08400-kube-api-access-7dk7j\") pod \"certified-operators-t2mhx\" (UID: \"4103aaf7-a794-43ec-be3d-3c72aff08400\") " pod="openshift-marketplace/certified-operators-t2mhx" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.741085 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/4103aaf7-a794-43ec-be3d-3c72aff08400-catalog-content\") pod \"certified-operators-t2mhx\" (UID: \"4103aaf7-a794-43ec-be3d-3c72aff08400\") " pod="openshift-marketplace/certified-operators-t2mhx" Jan 28 15:05:48 crc kubenswrapper[4981]: E0128 15:05:48.741592 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:49.241564729 +0000 UTC m=+160.693722970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.742019 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4103aaf7-a794-43ec-be3d-3c72aff08400-catalog-content\") pod \"certified-operators-t2mhx\" (UID: \"4103aaf7-a794-43ec-be3d-3c72aff08400\") " pod="openshift-marketplace/certified-operators-t2mhx" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.762422 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dk7j\" (UniqueName: \"kubernetes.io/projected/4103aaf7-a794-43ec-be3d-3c72aff08400-kube-api-access-7dk7j\") pod \"certified-operators-t2mhx\" (UID: \"4103aaf7-a794-43ec-be3d-3c72aff08400\") " pod="openshift-marketplace/certified-operators-t2mhx" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.780643 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2nm89"] Jan 28 15:05:48 crc kubenswrapper[4981]: W0128 15:05:48.820650 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4be2641_d6f3_4f86_ac61_e53d94db16c4.slice/crio-6a596b4bd638c9d2f0873be66595bd88bc7cb2a1cdfdb72f7e28d775ff8ec510 WatchSource:0}: Error finding container 6a596b4bd638c9d2f0873be66595bd88bc7cb2a1cdfdb72f7e28d775ff8ec510: Status 404 returned error can't find the container with id 6a596b4bd638c9d2f0873be66595bd88bc7cb2a1cdfdb72f7e28d775ff8ec510 Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.842405 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.842798 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05a8c011-4ea2-4515-a8a5-b77578c6517d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"05a8c011-4ea2-4515-a8a5-b77578c6517d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.842861 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/05a8c011-4ea2-4515-a8a5-b77578c6517d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"05a8c011-4ea2-4515-a8a5-b77578c6517d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:05:48 crc kubenswrapper[4981]: E0128 15:05:48.843025 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 15:05:49.343004995 +0000 UTC m=+160.795163236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.848746 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t2mhx" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.946565 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05a8c011-4ea2-4515-a8a5-b77578c6517d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"05a8c011-4ea2-4515-a8a5-b77578c6517d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.947015 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05a8c011-4ea2-4515-a8a5-b77578c6517d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"05a8c011-4ea2-4515-a8a5-b77578c6517d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.947067 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:48 crc kubenswrapper[4981]: E0128 15:05:48.947453 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 15:05:49.447438796 +0000 UTC m=+160.899597037 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vr269" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.947624 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05a8c011-4ea2-4515-a8a5-b77578c6517d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"05a8c011-4ea2-4515-a8a5-b77578c6517d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.971381 4981 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-28T15:05:48.218811141Z","Handler":null,"Name":""} Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.972163 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05a8c011-4ea2-4515-a8a5-b77578c6517d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"05a8c011-4ea2-4515-a8a5-b77578c6517d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.978804 4981 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 28 15:05:48 crc kubenswrapper[4981]: I0128 15:05:48.978838 4981 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.049497 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.056901 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.065092 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.163891 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.193200 4981 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.193246 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.243089 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:05:49 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Jan 28 15:05:49 crc kubenswrapper[4981]: [+]process-running ok Jan 28 15:05:49 crc kubenswrapper[4981]: healthz check failed Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.243163 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.304754 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rnbwh"] Jan 28 15:05:49 crc kubenswrapper[4981]: W0128 15:05:49.313865 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10fb71f7_ebd9_4ce7_91e8_cabe64948d68.slice/crio-a786052bab671b712f3277dabdd4ff1923ad6a394320da5c912cb88045d5bbe3 WatchSource:0}: Error finding container a786052bab671b712f3277dabdd4ff1923ad6a394320da5c912cb88045d5bbe3: Status 404 returned error can't find the container with id a786052bab671b712f3277dabdd4ff1923ad6a394320da5c912cb88045d5bbe3 Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.319790 4981 generic.go:334] "Generic (PLEG): container finished" podID="06d03aa9-a3ff-46c7-bafd-4666c5adf6c1" containerID="3e13f924a8264c00295f87280f363b1f94ad7aec189aa78a7c4447754f452cd7" exitCode=0 Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.319921 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gspg" event={"ID":"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1","Type":"ContainerDied","Data":"3e13f924a8264c00295f87280f363b1f94ad7aec189aa78a7c4447754f452cd7"} Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.319953 4981 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-9gspg" event={"ID":"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1","Type":"ContainerStarted","Data":"d59b8c67a5e693e25ff6e14c66415d109a8ab5236d8401f4d0f694b9aa9eea96"} Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.352410 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.393470 4981 generic.go:334] "Generic (PLEG): container finished" podID="a4be2641-d6f3-4f86-ac61-e53d94db16c4" containerID="ae826ed6bf393ef698e0ccbf272115748ab6f73c755fbbbeee3b32e7b0c17b92" exitCode=0 Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.396767 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.397505 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nm89" event={"ID":"a4be2641-d6f3-4f86-ac61-e53d94db16c4","Type":"ContainerDied","Data":"ae826ed6bf393ef698e0ccbf272115748ab6f73c755fbbbeee3b32e7b0c17b92"} Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.397565 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nm89" event={"ID":"a4be2641-d6f3-4f86-ac61-e53d94db16c4","Type":"ContainerStarted","Data":"6a596b4bd638c9d2f0873be66595bd88bc7cb2a1cdfdb72f7e28d775ff8ec510"} Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.472236 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" event={"ID":"8596ade7-4f4d-4f58-acb8-400366812372","Type":"ContainerStarted","Data":"358d7a22078f6c46c141229fceb957989429e4da9632a077e83636354c4bbc88"} Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.541537 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-qdt5k" podStartSLOduration=11.541512912 podStartE2EDuration="11.541512912s" podCreationTimestamp="2026-01-28 15:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:49.535284297 +0000 UTC m=+160.987442538" watchObservedRunningTime="2026-01-28 15:05:49.541512912 +0000 UTC m=+160.993671153" Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.549430 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-t2mhx"] Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.799888 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:49 crc kubenswrapper[4981]: I0128 15:05:49.799961 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:49.898179 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:49.898301 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.035775 4981 patch_prober.go:28] interesting pod/downloads-7954f5f757-xq5cv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.035879 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xq5cv" podUID="7a295e4c-ed2b-4d54-8b74-2901caa05143" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.036502 4981 patch_prober.go:28] interesting pod/downloads-7954f5f757-xq5cv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.036533 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xq5cv" podUID="7a295e4c-ed2b-4d54-8b74-2901caa05143" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.215582 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:05:50 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Jan 28 15:05:50 crc kubenswrapper[4981]: [+]process-running ok Jan 28 15:05:50 crc kubenswrapper[4981]: healthz check failed Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.215838 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.378424 4981 patch_prober.go:28] interesting pod/console-f9d7485db-vc85q container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.378507 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vc85q" podUID="5c29d863-f1a8-42dc-8916-988d6d45f3d9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.388004 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.425432 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-zn7fg" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.426005 4981 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vr269\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.435621 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.454370 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.511636 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"05a8c011-4ea2-4515-a8a5-b77578c6517d","Type":"ContainerStarted","Data":"7123e23039580b3bd7690787939e9557dc1542a030b8759ad45ca5747f7ebf78"} Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.514282 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v42ln"] Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.522176 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v42ln" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.539509 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.542147 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnbwh" event={"ID":"10fb71f7-ebd9-4ce7-91e8-cabe64948d68","Type":"ContainerStarted","Data":"a786052bab671b712f3277dabdd4ff1923ad6a394320da5c912cb88045d5bbe3"} Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.568570 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2mhx" event={"ID":"4103aaf7-a794-43ec-be3d-3c72aff08400","Type":"ContainerStarted","Data":"a2de7b41638d1c98858be70845c6f42912e172ff75ef70cb1f25e2637bb8eec6"} Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.584100 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v42ln"] Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.589862 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cbd927-94b5-452e-a139-8d797dd4f4f7-utilities\") pod \"redhat-marketplace-v42ln\" (UID: \"20cbd927-94b5-452e-a139-8d797dd4f4f7\") " pod="openshift-marketplace/redhat-marketplace-v42ln" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.590000 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llwzs\" (UniqueName: \"kubernetes.io/projected/20cbd927-94b5-452e-a139-8d797dd4f4f7-kube-api-access-llwzs\") pod \"redhat-marketplace-v42ln\" (UID: \"20cbd927-94b5-452e-a139-8d797dd4f4f7\") " pod="openshift-marketplace/redhat-marketplace-v42ln" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.590061 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cbd927-94b5-452e-a139-8d797dd4f4f7-catalog-content\") pod 
\"redhat-marketplace-v42ln\" (UID: \"20cbd927-94b5-452e-a139-8d797dd4f4f7\") " pod="openshift-marketplace/redhat-marketplace-v42ln" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.691008 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llwzs\" (UniqueName: \"kubernetes.io/projected/20cbd927-94b5-452e-a139-8d797dd4f4f7-kube-api-access-llwzs\") pod \"redhat-marketplace-v42ln\" (UID: \"20cbd927-94b5-452e-a139-8d797dd4f4f7\") " pod="openshift-marketplace/redhat-marketplace-v42ln" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.691427 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cbd927-94b5-452e-a139-8d797dd4f4f7-catalog-content\") pod \"redhat-marketplace-v42ln\" (UID: \"20cbd927-94b5-452e-a139-8d797dd4f4f7\") " pod="openshift-marketplace/redhat-marketplace-v42ln" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.691480 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cbd927-94b5-452e-a139-8d797dd4f4f7-utilities\") pod \"redhat-marketplace-v42ln\" (UID: \"20cbd927-94b5-452e-a139-8d797dd4f4f7\") " pod="openshift-marketplace/redhat-marketplace-v42ln" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.692020 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cbd927-94b5-452e-a139-8d797dd4f4f7-utilities\") pod \"redhat-marketplace-v42ln\" (UID: \"20cbd927-94b5-452e-a139-8d797dd4f4f7\") " pod="openshift-marketplace/redhat-marketplace-v42ln" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.692261 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cbd927-94b5-452e-a139-8d797dd4f4f7-catalog-content\") pod \"redhat-marketplace-v42ln\" (UID: \"20cbd927-94b5-452e-a139-8d797dd4f4f7\") " pod="openshift-marketplace/redhat-marketplace-v42ln" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.727657 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llwzs\" (UniqueName: \"kubernetes.io/projected/20cbd927-94b5-452e-a139-8d797dd4f4f7-kube-api-access-llwzs\") pod \"redhat-marketplace-v42ln\" (UID: \"20cbd927-94b5-452e-a139-8d797dd4f4f7\") " pod="openshift-marketplace/redhat-marketplace-v42ln" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.833360 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.833424 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.848229 4981 patch_prober.go:28] interesting pod/apiserver-76f77b778f-bj272 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 28 15:05:50 crc kubenswrapper[4981]: [+]log ok Jan 28 15:05:50 crc kubenswrapper[4981]: [+]etcd ok Jan 28 15:05:50 crc kubenswrapper[4981]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 28 15:05:50 crc kubenswrapper[4981]: [+]poststarthook/generic-apiserver-start-informers ok Jan 28 15:05:50 crc kubenswrapper[4981]: [+]poststarthook/max-in-flight-filter ok Jan 28 15:05:50 crc 
kubenswrapper[4981]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 28 15:05:50 crc kubenswrapper[4981]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 28 15:05:50 crc kubenswrapper[4981]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 28 15:05:50 crc kubenswrapper[4981]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Jan 28 15:05:50 crc kubenswrapper[4981]: [+]poststarthook/project.openshift.io-projectcache ok Jan 28 15:05:50 crc kubenswrapper[4981]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 28 15:05:50 crc kubenswrapper[4981]: [+]poststarthook/openshift.io-startinformers ok Jan 28 15:05:50 crc kubenswrapper[4981]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 28 15:05:50 crc kubenswrapper[4981]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 28 15:05:50 crc kubenswrapper[4981]: livez check failed Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.848687 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-bj272" podUID="9596c86d-a5e1-4ba3-b8cb-f5095fab3ee6" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.927438 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v42ln" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.930108 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-57gl4"] Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.933275 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-57gl4" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.967744 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-57gl4"] Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.996214 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/271ceb24-1e9d-44c5-a8d2-168c2b34d81a-catalog-content\") pod \"redhat-marketplace-57gl4\" (UID: \"271ceb24-1e9d-44c5-a8d2-168c2b34d81a\") " pod="openshift-marketplace/redhat-marketplace-57gl4" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.996446 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/271ceb24-1e9d-44c5-a8d2-168c2b34d81a-utilities\") pod \"redhat-marketplace-57gl4\" (UID: \"271ceb24-1e9d-44c5-a8d2-168c2b34d81a\") " pod="openshift-marketplace/redhat-marketplace-57gl4" Jan 28 15:05:50 crc kubenswrapper[4981]: I0128 15:05:50.996556 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5b96\" (UniqueName: \"kubernetes.io/projected/271ceb24-1e9d-44c5-a8d2-168c2b34d81a-kube-api-access-x5b96\") pod \"redhat-marketplace-57gl4\" (UID: \"271ceb24-1e9d-44c5-a8d2-168c2b34d81a\") " pod="openshift-marketplace/redhat-marketplace-57gl4" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.037914 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-rkdvv" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.097504 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/271ceb24-1e9d-44c5-a8d2-168c2b34d81a-utilities\") pod \"redhat-marketplace-57gl4\" (UID: \"271ceb24-1e9d-44c5-a8d2-168c2b34d81a\") " pod="openshift-marketplace/redhat-marketplace-57gl4" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.097608 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5b96\" (UniqueName: \"kubernetes.io/projected/271ceb24-1e9d-44c5-a8d2-168c2b34d81a-kube-api-access-x5b96\") pod \"redhat-marketplace-57gl4\" (UID: \"271ceb24-1e9d-44c5-a8d2-168c2b34d81a\") " pod="openshift-marketplace/redhat-marketplace-57gl4" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.097649 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/271ceb24-1e9d-44c5-a8d2-168c2b34d81a-catalog-content\") pod \"redhat-marketplace-57gl4\" (UID: \"271ceb24-1e9d-44c5-a8d2-168c2b34d81a\") " pod="openshift-marketplace/redhat-marketplace-57gl4" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.098087 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/271ceb24-1e9d-44c5-a8d2-168c2b34d81a-catalog-content\") pod \"redhat-marketplace-57gl4\" (UID: \"271ceb24-1e9d-44c5-a8d2-168c2b34d81a\") " pod="openshift-marketplace/redhat-marketplace-57gl4" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.098317 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/271ceb24-1e9d-44c5-a8d2-168c2b34d81a-utilities\") pod \"redhat-marketplace-57gl4\" (UID: \"271ceb24-1e9d-44c5-a8d2-168c2b34d81a\") " pod="openshift-marketplace/redhat-marketplace-57gl4" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.105901 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dqqn8"] Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.107087 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dqqn8" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.110078 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.120570 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dqqn8"] Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.127814 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5b96\" (UniqueName: \"kubernetes.io/projected/271ceb24-1e9d-44c5-a8d2-168c2b34d81a-kube-api-access-x5b96\") pod \"redhat-marketplace-57gl4\" (UID: \"271ceb24-1e9d-44c5-a8d2-168c2b34d81a\") " pod="openshift-marketplace/redhat-marketplace-57gl4" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.214363 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.218537 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:05:51 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Jan 28 15:05:51 crc kubenswrapper[4981]: [+]process-running ok Jan 28 15:05:51 crc kubenswrapper[4981]: healthz check failed Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.218607 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.235761 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vr269"] Jan 28 15:05:51 crc kubenswrapper[4981]: W0128 15:05:51.244698 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa35bf3f_51fc_43b8_8e38_ed5c88a362f7.slice/crio-ab9c896bcd9886899ccd202be181525b7228310f05e1e26c35e71ed32d1dd11f WatchSource:0}: Error finding container ab9c896bcd9886899ccd202be181525b7228310f05e1e26c35e71ed32d1dd11f: Status 404 returned error can't find the container with id ab9c896bcd9886899ccd202be181525b7228310f05e1e26c35e71ed32d1dd11f Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.245510 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v42ln"] Jan 28 15:05:51 crc kubenswrapper[4981]: W0128 15:05:51.256054 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20cbd927_94b5_452e_a139_8d797dd4f4f7.slice/crio-3450df77b6cd1841c0c3292299d83aae1c011fdf9775f84d9691013bef6b0432 WatchSource:0}: Error finding container 3450df77b6cd1841c0c3292299d83aae1c011fdf9775f84d9691013bef6b0432: Status 404 returned error can't find the container with id 3450df77b6cd1841c0c3292299d83aae1c011fdf9775f84d9691013bef6b0432 Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.285094 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k2h52"] Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.288227 4981 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2h52" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.291285 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.299879 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2h52"] Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.300101 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82dk9\" (UniqueName: \"kubernetes.io/projected/b1101605-52b4-4c83-9958-11c0fe93d5e3-kube-api-access-82dk9\") pod \"redhat-operators-dqqn8\" (UID: \"b1101605-52b4-4c83-9958-11c0fe93d5e3\") " pod="openshift-marketplace/redhat-operators-dqqn8" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.300179 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1101605-52b4-4c83-9958-11c0fe93d5e3-utilities\") pod \"redhat-operators-dqqn8\" (UID: \"b1101605-52b4-4c83-9958-11c0fe93d5e3\") " pod="openshift-marketplace/redhat-operators-dqqn8" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.300250 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1101605-52b4-4c83-9958-11c0fe93d5e3-catalog-content\") pod \"redhat-operators-dqqn8\" (UID: \"b1101605-52b4-4c83-9958-11c0fe93d5e3\") " pod="openshift-marketplace/redhat-operators-dqqn8" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.343971 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-57gl4" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.402676 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1101605-52b4-4c83-9958-11c0fe93d5e3-utilities\") pod \"redhat-operators-dqqn8\" (UID: \"b1101605-52b4-4c83-9958-11c0fe93d5e3\") " pod="openshift-marketplace/redhat-operators-dqqn8" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.402750 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhbvz\" (UniqueName: \"kubernetes.io/projected/0d2bf303-3b5f-4c86-bf7e-eeef12979424-kube-api-access-hhbvz\") pod \"redhat-operators-k2h52\" (UID: \"0d2bf303-3b5f-4c86-bf7e-eeef12979424\") " pod="openshift-marketplace/redhat-operators-k2h52" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.402963 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1101605-52b4-4c83-9958-11c0fe93d5e3-catalog-content\") pod \"redhat-operators-dqqn8\" (UID: \"b1101605-52b4-4c83-9958-11c0fe93d5e3\") " pod="openshift-marketplace/redhat-operators-dqqn8" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.403466 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1101605-52b4-4c83-9958-11c0fe93d5e3-catalog-content\") pod \"redhat-operators-dqqn8\" (UID: \"b1101605-52b4-4c83-9958-11c0fe93d5e3\") " pod="openshift-marketplace/redhat-operators-dqqn8" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.403551 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1101605-52b4-4c83-9958-11c0fe93d5e3-utilities\") pod \"redhat-operators-dqqn8\" (UID: \"b1101605-52b4-4c83-9958-11c0fe93d5e3\") " pod="openshift-marketplace/redhat-operators-dqqn8" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.403332 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d2bf303-3b5f-4c86-bf7e-eeef12979424-catalog-content\") pod \"redhat-operators-k2h52\" (UID: \"0d2bf303-3b5f-4c86-bf7e-eeef12979424\") " pod="openshift-marketplace/redhat-operators-k2h52" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.404944 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82dk9\" (UniqueName: \"kubernetes.io/projected/b1101605-52b4-4c83-9958-11c0fe93d5e3-kube-api-access-82dk9\") pod \"redhat-operators-dqqn8\" (UID: \"b1101605-52b4-4c83-9958-11c0fe93d5e3\") " pod="openshift-marketplace/redhat-operators-dqqn8" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.405020 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d2bf303-3b5f-4c86-bf7e-eeef12979424-utilities\") pod \"redhat-operators-k2h52\" (UID: \"0d2bf303-3b5f-4c86-bf7e-eeef12979424\") " pod="openshift-marketplace/redhat-operators-k2h52" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.426960 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82dk9\" (UniqueName: \"kubernetes.io/projected/b1101605-52b4-4c83-9958-11c0fe93d5e3-kube-api-access-82dk9\") pod \"redhat-operators-dqqn8\" (UID: 
\"b1101605-52b4-4c83-9958-11c0fe93d5e3\") " pod="openshift-marketplace/redhat-operators-dqqn8" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.435474 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqqn8" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.508009 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhbvz\" (UniqueName: \"kubernetes.io/projected/0d2bf303-3b5f-4c86-bf7e-eeef12979424-kube-api-access-hhbvz\") pod \"redhat-operators-k2h52\" (UID: \"0d2bf303-3b5f-4c86-bf7e-eeef12979424\") " pod="openshift-marketplace/redhat-operators-k2h52" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.508565 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d2bf303-3b5f-4c86-bf7e-eeef12979424-catalog-content\") pod \"redhat-operators-k2h52\" (UID: \"0d2bf303-3b5f-4c86-bf7e-eeef12979424\") " pod="openshift-marketplace/redhat-operators-k2h52" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.508612 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d2bf303-3b5f-4c86-bf7e-eeef12979424-utilities\") pod \"redhat-operators-k2h52\" (UID: \"0d2bf303-3b5f-4c86-bf7e-eeef12979424\") " pod="openshift-marketplace/redhat-operators-k2h52" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.509165 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d2bf303-3b5f-4c86-bf7e-eeef12979424-utilities\") pod \"redhat-operators-k2h52\" (UID: \"0d2bf303-3b5f-4c86-bf7e-eeef12979424\") " pod="openshift-marketplace/redhat-operators-k2h52" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.510129 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d2bf303-3b5f-4c86-bf7e-eeef12979424-catalog-content\") pod \"redhat-operators-k2h52\" (UID: \"0d2bf303-3b5f-4c86-bf7e-eeef12979424\") " pod="openshift-marketplace/redhat-operators-k2h52" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.535708 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhbvz\" (UniqueName: \"kubernetes.io/projected/0d2bf303-3b5f-4c86-bf7e-eeef12979424-kube-api-access-hhbvz\") pod \"redhat-operators-k2h52\" (UID: \"0d2bf303-3b5f-4c86-bf7e-eeef12979424\") " pod="openshift-marketplace/redhat-operators-k2h52" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.578396 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vr269" event={"ID":"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7","Type":"ContainerStarted","Data":"e7dd630e086659897d42ca67b7ed5e04e4fb05246aa2c44aabf9c3070a51a582"} Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.578461 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vr269" event={"ID":"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7","Type":"ContainerStarted","Data":"ab9c896bcd9886899ccd202be181525b7228310f05e1e26c35e71ed32d1dd11f"} Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.578520 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.586988 4981 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.587862 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.591986 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.592287 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.603218 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.606972 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-vr269" podStartSLOduration=137.606947142 podStartE2EDuration="2m17.606947142s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:51.605039764 +0000 UTC m=+163.057197995" watchObservedRunningTime="2026-01-28 15:05:51.606947142 +0000 UTC m=+163.059105383" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.607750 4981 generic.go:334] "Generic (PLEG): container finished" podID="10fb71f7-ebd9-4ce7-91e8-cabe64948d68" containerID="35ad42582ed39ee08835dcd416385b5186ae4fc38510834f5f11804f5802ee76" exitCode=0 Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.607936 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2h52" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.608512 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnbwh" event={"ID":"10fb71f7-ebd9-4ce7-91e8-cabe64948d68","Type":"ContainerDied","Data":"35ad42582ed39ee08835dcd416385b5186ae4fc38510834f5f11804f5802ee76"} Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.624945 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5265f29-45aa-495a-9c5d-3d5b1c3459d5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f5265f29-45aa-495a-9c5d-3d5b1c3459d5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.625177 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5265f29-45aa-495a-9c5d-3d5b1c3459d5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f5265f29-45aa-495a-9c5d-3d5b1c3459d5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.648073 4981 generic.go:334] "Generic (PLEG): container finished" podID="20cbd927-94b5-452e-a139-8d797dd4f4f7" containerID="f60d550dfd389c6efc973d77297f6e1cf1ce9a1bcffd417d140edce3d9b363ac" exitCode=0 Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.648608 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v42ln" 
event={"ID":"20cbd927-94b5-452e-a139-8d797dd4f4f7","Type":"ContainerDied","Data":"f60d550dfd389c6efc973d77297f6e1cf1ce9a1bcffd417d140edce3d9b363ac"} Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.648747 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v42ln" event={"ID":"20cbd927-94b5-452e-a139-8d797dd4f4f7","Type":"ContainerStarted","Data":"3450df77b6cd1841c0c3292299d83aae1c011fdf9775f84d9691013bef6b0432"} Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.681977 4981 generic.go:334] "Generic (PLEG): container finished" podID="4103aaf7-a794-43ec-be3d-3c72aff08400" containerID="88eee054fb0e2801cb41bdb834370fce5abd38f7838a7bd69d7e71b2e7ded9fb" exitCode=0 Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.682052 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2mhx" event={"ID":"4103aaf7-a794-43ec-be3d-3c72aff08400","Type":"ContainerDied","Data":"88eee054fb0e2801cb41bdb834370fce5abd38f7838a7bd69d7e71b2e7ded9fb"} Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.686859 4981 generic.go:334] "Generic (PLEG): container finished" podID="05a8c011-4ea2-4515-a8a5-b77578c6517d" containerID="7fb5b9a5225aaf83ba8e0de591ca2d003d7a5f428334d608e204f372432d9ab6" exitCode=0 Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.686911 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"05a8c011-4ea2-4515-a8a5-b77578c6517d","Type":"ContainerDied","Data":"7fb5b9a5225aaf83ba8e0de591ca2d003d7a5f428334d608e204f372432d9ab6"} Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.730400 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5265f29-45aa-495a-9c5d-3d5b1c3459d5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f5265f29-45aa-495a-9c5d-3d5b1c3459d5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.730945 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5265f29-45aa-495a-9c5d-3d5b1c3459d5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f5265f29-45aa-495a-9c5d-3d5b1c3459d5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.732392 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5265f29-45aa-495a-9c5d-3d5b1c3459d5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"f5265f29-45aa-495a-9c5d-3d5b1c3459d5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:05:51 crc kubenswrapper[4981]: W0128 15:05:51.736176 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod271ceb24_1e9d_44c5_a8d2_168c2b34d81a.slice/crio-014e993ce6e1240245e38e89895f09a39d71f6052643fce5b574553fe21c1c0c WatchSource:0}: Error finding container 014e993ce6e1240245e38e89895f09a39d71f6052643fce5b574553fe21c1c0c: Status 404 returned error can't find the container with id 014e993ce6e1240245e38e89895f09a39d71f6052643fce5b574553fe21c1c0c Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.753293 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/f5265f29-45aa-495a-9c5d-3d5b1c3459d5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"f5265f29-45aa-495a-9c5d-3d5b1c3459d5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.783253 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-57gl4"] Jan 28 15:05:51 crc kubenswrapper[4981]: I0128 15:05:51.951689 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:05:52 crc kubenswrapper[4981]: I0128 15:05:52.022429 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k2h52"] Jan 28 15:05:52 crc kubenswrapper[4981]: I0128 15:05:52.035019 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dqqn8"] Jan 28 15:05:52 crc kubenswrapper[4981]: W0128 15:05:52.074414 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1101605_52b4_4c83_9958_11c0fe93d5e3.slice/crio-ccb0c44870cdd42b6f3807dc5873bc0a99cd0486bfa0bc84ecf29c6b09da06cc WatchSource:0}: Error finding container ccb0c44870cdd42b6f3807dc5873bc0a99cd0486bfa0bc84ecf29c6b09da06cc: Status 404 returned error can't find the container with id ccb0c44870cdd42b6f3807dc5873bc0a99cd0486bfa0bc84ecf29c6b09da06cc Jan 28 15:05:52 crc kubenswrapper[4981]: W0128 15:05:52.075129 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d2bf303_3b5f_4c86_bf7e_eeef12979424.slice/crio-6185a2e0cca93b7b557080f3fe1aeb0acc71738f4c884c348890ad359c5b019b WatchSource:0}: Error finding container 6185a2e0cca93b7b557080f3fe1aeb0acc71738f4c884c348890ad359c5b019b: Status 404 returned error can't find the container with id 6185a2e0cca93b7b557080f3fe1aeb0acc71738f4c884c348890ad359c5b019b Jan 28 15:05:52 crc kubenswrapper[4981]: I0128 15:05:52.217722 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:05:52 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Jan 28 15:05:52 crc kubenswrapper[4981]: [+]process-running ok Jan 28 15:05:52 crc kubenswrapper[4981]: healthz check failed Jan 28 15:05:52 crc kubenswrapper[4981]: I0128 15:05:52.218111 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:05:52 crc kubenswrapper[4981]: I0128 15:05:52.540620 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 15:05:52 crc kubenswrapper[4981]: W0128 15:05:52.556841 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf5265f29_45aa_495a_9c5d_3d5b1c3459d5.slice/crio-143ba4d2b71eb7ac7448b2733fa72308f0c3f55c07c8f95ac63f609ebee9565e WatchSource:0}: Error finding container 143ba4d2b71eb7ac7448b2733fa72308f0c3f55c07c8f95ac63f609ebee9565e: Status 404 returned error can't find the container with id 143ba4d2b71eb7ac7448b2733fa72308f0c3f55c07c8f95ac63f609ebee9565e Jan 28 15:05:52 crc kubenswrapper[4981]: I0128 
15:05:52.712526 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqqn8" event={"ID":"b1101605-52b4-4c83-9958-11c0fe93d5e3","Type":"ContainerStarted","Data":"ccb0c44870cdd42b6f3807dc5873bc0a99cd0486bfa0bc84ecf29c6b09da06cc"} Jan 28 15:05:52 crc kubenswrapper[4981]: I0128 15:05:52.714741 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f5265f29-45aa-495a-9c5d-3d5b1c3459d5","Type":"ContainerStarted","Data":"143ba4d2b71eb7ac7448b2733fa72308f0c3f55c07c8f95ac63f609ebee9565e"} Jan 28 15:05:52 crc kubenswrapper[4981]: I0128 15:05:52.718643 4981 generic.go:334] "Generic (PLEG): container finished" podID="271ceb24-1e9d-44c5-a8d2-168c2b34d81a" containerID="79b62f4589aba8d2615592c6ddfc12dd44f4bc66b217fbb4d212050bebf69f5c" exitCode=0 Jan 28 15:05:52 crc kubenswrapper[4981]: I0128 15:05:52.718751 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57gl4" event={"ID":"271ceb24-1e9d-44c5-a8d2-168c2b34d81a","Type":"ContainerDied","Data":"79b62f4589aba8d2615592c6ddfc12dd44f4bc66b217fbb4d212050bebf69f5c"} Jan 28 15:05:52 crc kubenswrapper[4981]: I0128 15:05:52.718845 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57gl4" event={"ID":"271ceb24-1e9d-44c5-a8d2-168c2b34d81a","Type":"ContainerStarted","Data":"014e993ce6e1240245e38e89895f09a39d71f6052643fce5b574553fe21c1c0c"} Jan 28 15:05:52 crc kubenswrapper[4981]: I0128 15:05:52.720772 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2h52" event={"ID":"0d2bf303-3b5f-4c86-bf7e-eeef12979424","Type":"ContainerStarted","Data":"970eb8e61131d4733c0fd13fc92f2ea1bb5da88db72a6e9756d872c5764bbb42"} Jan 28 15:05:52 crc kubenswrapper[4981]: I0128 15:05:52.720832 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2h52" event={"ID":"0d2bf303-3b5f-4c86-bf7e-eeef12979424","Type":"ContainerStarted","Data":"6185a2e0cca93b7b557080f3fe1aeb0acc71738f4c884c348890ad359c5b019b"} Jan 28 15:05:52 crc kubenswrapper[4981]: I0128 15:05:52.971073 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:05:53 crc kubenswrapper[4981]: I0128 15:05:53.058631 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05a8c011-4ea2-4515-a8a5-b77578c6517d-kube-api-access\") pod \"05a8c011-4ea2-4515-a8a5-b77578c6517d\" (UID: \"05a8c011-4ea2-4515-a8a5-b77578c6517d\") " Jan 28 15:05:53 crc kubenswrapper[4981]: I0128 15:05:53.058729 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05a8c011-4ea2-4515-a8a5-b77578c6517d-kubelet-dir\") pod \"05a8c011-4ea2-4515-a8a5-b77578c6517d\" (UID: \"05a8c011-4ea2-4515-a8a5-b77578c6517d\") " Jan 28 15:05:53 crc kubenswrapper[4981]: I0128 15:05:53.058853 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05a8c011-4ea2-4515-a8a5-b77578c6517d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "05a8c011-4ea2-4515-a8a5-b77578c6517d" (UID: "05a8c011-4ea2-4515-a8a5-b77578c6517d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:05:53 crc kubenswrapper[4981]: I0128 15:05:53.059120 4981 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/05a8c011-4ea2-4515-a8a5-b77578c6517d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:05:53 crc kubenswrapper[4981]: I0128 15:05:53.067398 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05a8c011-4ea2-4515-a8a5-b77578c6517d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "05a8c011-4ea2-4515-a8a5-b77578c6517d" (UID: "05a8c011-4ea2-4515-a8a5-b77578c6517d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:05:53 crc kubenswrapper[4981]: I0128 15:05:53.160399 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/05a8c011-4ea2-4515-a8a5-b77578c6517d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:05:53 crc kubenswrapper[4981]: I0128 15:05:53.219116 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:05:53 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Jan 28 15:05:53 crc kubenswrapper[4981]: [+]process-running ok Jan 28 15:05:53 crc kubenswrapper[4981]: healthz check failed Jan 28 15:05:53 crc kubenswrapper[4981]: I0128 15:05:53.219234 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:05:53 crc kubenswrapper[4981]: I0128 15:05:53.731102 4981 generic.go:334] "Generic (PLEG): container finished" podID="ed4c6fed-3e17-40a9-b844-adc144028848" containerID="f997bcef97eadacc863d035087f4cbcb25c94fc21b494676a224659f1514b8a5" exitCode=0 Jan 28 15:05:53 crc kubenswrapper[4981]: I0128 15:05:53.731179 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf" event={"ID":"ed4c6fed-3e17-40a9-b844-adc144028848","Type":"ContainerDied","Data":"f997bcef97eadacc863d035087f4cbcb25c94fc21b494676a224659f1514b8a5"} Jan 28 15:05:53 crc kubenswrapper[4981]: I0128 15:05:53.734883 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"05a8c011-4ea2-4515-a8a5-b77578c6517d","Type":"ContainerDied","Data":"7123e23039580b3bd7690787939e9557dc1542a030b8759ad45ca5747f7ebf78"} Jan 28 15:05:53 crc kubenswrapper[4981]: I0128 15:05:53.734946 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7123e23039580b3bd7690787939e9557dc1542a030b8759ad45ca5747f7ebf78" Jan 28 15:05:53 crc kubenswrapper[4981]: I0128 15:05:53.735039 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 15:05:53 crc kubenswrapper[4981]: I0128 15:05:53.741158 4981 generic.go:334] "Generic (PLEG): container finished" podID="b1101605-52b4-4c83-9958-11c0fe93d5e3" containerID="6c1abee4818e6b86d573f42e34dd555c3aefe9ac6237f324c65349c1a86f7cc0" exitCode=0 Jan 28 15:05:53 crc kubenswrapper[4981]: I0128 15:05:53.741300 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqqn8" event={"ID":"b1101605-52b4-4c83-9958-11c0fe93d5e3","Type":"ContainerDied","Data":"6c1abee4818e6b86d573f42e34dd555c3aefe9ac6237f324c65349c1a86f7cc0"} Jan 28 15:05:53 crc kubenswrapper[4981]: I0128 15:05:53.749883 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f5265f29-45aa-495a-9c5d-3d5b1c3459d5","Type":"ContainerStarted","Data":"3b641dd5641e4f184eb4a6a9aa633b35f65e404af740e63df3be3ba12381433a"} Jan 28 15:05:53 crc kubenswrapper[4981]: I0128 15:05:53.756220 4981 generic.go:334] "Generic (PLEG): container finished" podID="0d2bf303-3b5f-4c86-bf7e-eeef12979424" containerID="970eb8e61131d4733c0fd13fc92f2ea1bb5da88db72a6e9756d872c5764bbb42" exitCode=0 Jan 28 15:05:53 crc kubenswrapper[4981]: I0128 15:05:53.756512 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2h52" event={"ID":"0d2bf303-3b5f-4c86-bf7e-eeef12979424","Type":"ContainerDied","Data":"970eb8e61131d4733c0fd13fc92f2ea1bb5da88db72a6e9756d872c5764bbb42"} Jan 28 15:05:53 crc kubenswrapper[4981]: I0128 15:05:53.843603 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.843581794 podStartE2EDuration="2.843581794s" podCreationTimestamp="2026-01-28 15:05:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:05:53.841744698 +0000 UTC m=+165.293902949" watchObservedRunningTime="2026-01-28 15:05:53.843581794 +0000 UTC m=+165.295740035" Jan 28 15:05:54 crc kubenswrapper[4981]: I0128 15:05:54.218805 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:05:54 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Jan 28 15:05:54 crc kubenswrapper[4981]: [+]process-running ok Jan 28 15:05:54 crc kubenswrapper[4981]: healthz check failed Jan 28 15:05:54 crc kubenswrapper[4981]: I0128 15:05:54.218904 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:05:54 crc kubenswrapper[4981]: I0128 15:05:54.764022 4981 generic.go:334] "Generic (PLEG): container finished" podID="f5265f29-45aa-495a-9c5d-3d5b1c3459d5" containerID="3b641dd5641e4f184eb4a6a9aa633b35f65e404af740e63df3be3ba12381433a" exitCode=0 Jan 28 15:05:54 crc kubenswrapper[4981]: I0128 15:05:54.764108 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f5265f29-45aa-495a-9c5d-3d5b1c3459d5","Type":"ContainerDied","Data":"3b641dd5641e4f184eb4a6a9aa633b35f65e404af740e63df3be3ba12381433a"} Jan 28 
15:05:55 crc kubenswrapper[4981]: I0128 15:05:55.216460 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:05:55 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Jan 28 15:05:55 crc kubenswrapper[4981]: [+]process-running ok Jan 28 15:05:55 crc kubenswrapper[4981]: healthz check failed Jan 28 15:05:55 crc kubenswrapper[4981]: I0128 15:05:55.216975 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:05:55 crc kubenswrapper[4981]: I0128 15:05:55.839205 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:55 crc kubenswrapper[4981]: I0128 15:05:55.844141 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-bj272" Jan 28 15:05:56 crc kubenswrapper[4981]: I0128 15:05:56.214731 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:05:56 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Jan 28 15:05:56 crc kubenswrapper[4981]: [+]process-running ok Jan 28 15:05:56 crc kubenswrapper[4981]: healthz check failed Jan 28 15:05:56 crc kubenswrapper[4981]: I0128 15:05:56.214988 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:05:56 crc kubenswrapper[4981]: I0128 15:05:56.352960 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-74zjr" Jan 28 15:05:57 crc kubenswrapper[4981]: I0128 15:05:57.214122 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:05:57 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Jan 28 15:05:57 crc kubenswrapper[4981]: [+]process-running ok Jan 28 15:05:57 crc kubenswrapper[4981]: healthz check failed Jan 28 15:05:57 crc kubenswrapper[4981]: I0128 15:05:57.214226 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:05:57 crc kubenswrapper[4981]: I0128 15:05:57.758939 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs\") pod \"network-metrics-daemon-8rsts\" (UID: \"d5fda60c-a87b-4810-81df-4c7717d34ac1\") " pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:57 crc kubenswrapper[4981]: I0128 15:05:57.766290 4981 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d5fda60c-a87b-4810-81df-4c7717d34ac1-metrics-certs\") pod \"network-metrics-daemon-8rsts\" (UID: \"d5fda60c-a87b-4810-81df-4c7717d34ac1\") " pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:57 crc kubenswrapper[4981]: I0128 15:05:57.936231 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-8rsts" Jan 28 15:05:58 crc kubenswrapper[4981]: I0128 15:05:58.214802 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:05:58 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Jan 28 15:05:58 crc kubenswrapper[4981]: [+]process-running ok Jan 28 15:05:58 crc kubenswrapper[4981]: healthz check failed Jan 28 15:05:58 crc kubenswrapper[4981]: I0128 15:05:58.215288 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:05:59 crc kubenswrapper[4981]: I0128 15:05:59.214533 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:05:59 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Jan 28 15:05:59 crc kubenswrapper[4981]: [+]process-running ok Jan 28 15:05:59 crc kubenswrapper[4981]: healthz check failed Jan 28 15:05:59 crc kubenswrapper[4981]: I0128 15:05:59.214619 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:05:59 crc kubenswrapper[4981]: I0128 15:05:59.800804 4981 patch_prober.go:28] interesting pod/console-f9d7485db-vc85q container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 28 15:05:59 crc kubenswrapper[4981]: I0128 15:05:59.800920 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-vc85q" podUID="5c29d863-f1a8-42dc-8916-988d6d45f3d9" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 28 15:06:00 crc kubenswrapper[4981]: I0128 15:06:00.036456 4981 patch_prober.go:28] interesting pod/downloads-7954f5f757-xq5cv container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Jan 28 15:06:00 crc kubenswrapper[4981]: I0128 15:06:00.036541 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xq5cv" podUID="7a295e4c-ed2b-4d54-8b74-2901caa05143" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Jan 28 15:06:00 crc kubenswrapper[4981]: I0128 15:06:00.037087 4981 
patch_prober.go:28] interesting pod/downloads-7954f5f757-xq5cv container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" start-of-body= Jan 28 15:06:00 crc kubenswrapper[4981]: I0128 15:06:00.037281 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xq5cv" podUID="7a295e4c-ed2b-4d54-8b74-2901caa05143" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.7:8080/\": dial tcp 10.217.0.7:8080: connect: connection refused" Jan 28 15:06:00 crc kubenswrapper[4981]: I0128 15:06:00.214290 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:06:00 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Jan 28 15:06:00 crc kubenswrapper[4981]: [+]process-running ok Jan 28 15:06:00 crc kubenswrapper[4981]: healthz check failed Jan 28 15:06:00 crc kubenswrapper[4981]: I0128 15:06:00.214361 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:06:01 crc kubenswrapper[4981]: I0128 15:06:01.215738 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:06:01 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Jan 28 15:06:01 crc kubenswrapper[4981]: [+]process-running ok Jan 28 15:06:01 crc kubenswrapper[4981]: healthz check failed Jan 28 15:06:01 crc kubenswrapper[4981]: I0128 15:06:01.216810 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.057585 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf" Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.062685 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.137113 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5265f29-45aa-495a-9c5d-3d5b1c3459d5-kubelet-dir\") pod \"f5265f29-45aa-495a-9c5d-3d5b1c3459d5\" (UID: \"f5265f29-45aa-495a-9c5d-3d5b1c3459d5\") " Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.137224 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mcxk\" (UniqueName: \"kubernetes.io/projected/ed4c6fed-3e17-40a9-b844-adc144028848-kube-api-access-9mcxk\") pod \"ed4c6fed-3e17-40a9-b844-adc144028848\" (UID: \"ed4c6fed-3e17-40a9-b844-adc144028848\") " Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.137284 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed4c6fed-3e17-40a9-b844-adc144028848-config-volume\") pod \"ed4c6fed-3e17-40a9-b844-adc144028848\" (UID: \"ed4c6fed-3e17-40a9-b844-adc144028848\") " Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.137314 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5265f29-45aa-495a-9c5d-3d5b1c3459d5-kube-api-access\") pod \"f5265f29-45aa-495a-9c5d-3d5b1c3459d5\" (UID: \"f5265f29-45aa-495a-9c5d-3d5b1c3459d5\") " Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.137345 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed4c6fed-3e17-40a9-b844-adc144028848-secret-volume\") pod \"ed4c6fed-3e17-40a9-b844-adc144028848\" (UID: \"ed4c6fed-3e17-40a9-b844-adc144028848\") " Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.138218 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5265f29-45aa-495a-9c5d-3d5b1c3459d5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f5265f29-45aa-495a-9c5d-3d5b1c3459d5" (UID: "f5265f29-45aa-495a-9c5d-3d5b1c3459d5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.139269 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed4c6fed-3e17-40a9-b844-adc144028848-config-volume" (OuterVolumeSpecName: "config-volume") pod "ed4c6fed-3e17-40a9-b844-adc144028848" (UID: "ed4c6fed-3e17-40a9-b844-adc144028848"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.157002 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed4c6fed-3e17-40a9-b844-adc144028848-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ed4c6fed-3e17-40a9-b844-adc144028848" (UID: "ed4c6fed-3e17-40a9-b844-adc144028848"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.159275 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5265f29-45aa-495a-9c5d-3d5b1c3459d5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f5265f29-45aa-495a-9c5d-3d5b1c3459d5" (UID: "f5265f29-45aa-495a-9c5d-3d5b1c3459d5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.159546 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed4c6fed-3e17-40a9-b844-adc144028848-kube-api-access-9mcxk" (OuterVolumeSpecName: "kube-api-access-9mcxk") pod "ed4c6fed-3e17-40a9-b844-adc144028848" (UID: "ed4c6fed-3e17-40a9-b844-adc144028848"). InnerVolumeSpecName "kube-api-access-9mcxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.216068 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:06:02 crc kubenswrapper[4981]: [-]has-synced failed: reason withheld Jan 28 15:06:02 crc kubenswrapper[4981]: [+]process-running ok Jan 28 15:06:02 crc kubenswrapper[4981]: healthz check failed Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.216139 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.238371 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mcxk\" (UniqueName: \"kubernetes.io/projected/ed4c6fed-3e17-40a9-b844-adc144028848-kube-api-access-9mcxk\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.238410 4981 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed4c6fed-3e17-40a9-b844-adc144028848-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.238421 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f5265f29-45aa-495a-9c5d-3d5b1c3459d5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.238429 4981 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ed4c6fed-3e17-40a9-b844-adc144028848-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.238438 4981 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f5265f29-45aa-495a-9c5d-3d5b1c3459d5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.435531 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-8rsts"] Jan 28 15:06:02 crc kubenswrapper[4981]: W0128 15:06:02.439011 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5fda60c_a87b_4810_81df_4c7717d34ac1.slice/crio-2903772dedf2071b2d828fc5eb0cdd9426007665c905d69f11495158e14a245a WatchSource:0}: Error finding container 2903772dedf2071b2d828fc5eb0cdd9426007665c905d69f11495158e14a245a: Status 404 returned error can't find the container with id 2903772dedf2071b2d828fc5eb0cdd9426007665c905d69f11495158e14a245a Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.841905 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-8rsts" event={"ID":"d5fda60c-a87b-4810-81df-4c7717d34ac1","Type":"ContainerStarted","Data":"2903772dedf2071b2d828fc5eb0cdd9426007665c905d69f11495158e14a245a"} Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.844289 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"f5265f29-45aa-495a-9c5d-3d5b1c3459d5","Type":"ContainerDied","Data":"143ba4d2b71eb7ac7448b2733fa72308f0c3f55c07c8f95ac63f609ebee9565e"} Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.844354 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.844365 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="143ba4d2b71eb7ac7448b2733fa72308f0c3f55c07c8f95ac63f609ebee9565e" Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.845930 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf" event={"ID":"ed4c6fed-3e17-40a9-b844-adc144028848","Type":"ContainerDied","Data":"62952e8ef228483191c5ebc3a7c70d51fb5e1b25cadbfa158f091104441620e3"} Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.845964 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62952e8ef228483191c5ebc3a7c70d51fb5e1b25cadbfa158f091104441620e3" Jan 28 15:06:02 crc kubenswrapper[4981]: I0128 15:06:02.846050 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf" Jan 28 15:06:03 crc kubenswrapper[4981]: I0128 15:06:03.213826 4981 patch_prober.go:28] interesting pod/router-default-5444994796-bnsn8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 15:06:03 crc kubenswrapper[4981]: [+]has-synced ok Jan 28 15:06:03 crc kubenswrapper[4981]: [+]process-running ok Jan 28 15:06:03 crc kubenswrapper[4981]: healthz check failed Jan 28 15:06:03 crc kubenswrapper[4981]: I0128 15:06:03.214348 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-bnsn8" podUID="d9879f20-7ec3-46f5-b58c-6f49e431d23f" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 15:06:03 crc kubenswrapper[4981]: I0128 15:06:03.855820 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8rsts" event={"ID":"d5fda60c-a87b-4810-81df-4c7717d34ac1","Type":"ContainerStarted","Data":"3de072e97ac9498011cdbb9aa55d5d98f1ed06641b510af118936bf0d5505c46"} Jan 28 15:06:04 crc kubenswrapper[4981]: I0128 15:06:04.217930 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:06:04 crc kubenswrapper[4981]: I0128 15:06:04.219881 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-bnsn8" Jan 28 15:06:10 crc kubenswrapper[4981]: I0128 15:06:10.050247 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-xq5cv" Jan 28 15:06:10 crc kubenswrapper[4981]: I0128 15:06:10.062573 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:06:10 crc kubenswrapper[4981]: I0128 15:06:10.072676 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:06:10 crc kubenswrapper[4981]: I0128 15:06:10.463083 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:06:15 crc kubenswrapper[4981]: E0128 15:06:15.084058 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 15:06:15 crc kubenswrapper[4981]: E0128 15:06:15.085475 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-46mxf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9gspg_openshift-marketplace(06d03aa9-a3ff-46c7-bafd-4666c5adf6c1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:06:15 crc kubenswrapper[4981]: E0128 15:06:15.086755 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9gspg" podUID="06d03aa9-a3ff-46c7-bafd-4666c5adf6c1" Jan 28 15:06:15 crc kubenswrapper[4981]: E0128 15:06:15.522373 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 15:06:15 crc kubenswrapper[4981]: E0128 15:06:15.522635 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jdqdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-2nm89_openshift-marketplace(a4be2641-d6f3-4f86-ac61-e53d94db16c4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:06:15 crc kubenswrapper[4981]: E0128 15:06:15.523839 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-2nm89" podUID="a4be2641-d6f3-4f86-ac61-e53d94db16c4" Jan 28 15:06:17 crc kubenswrapper[4981]: I0128 15:06:17.798978 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 15:06:19 crc kubenswrapper[4981]: I0128 15:06:19.898161 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:06:19 crc kubenswrapper[4981]: I0128 15:06:19.898606 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:06:20 crc kubenswrapper[4981]: E0128 15:06:20.703665 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-2nm89" podUID="a4be2641-d6f3-4f86-ac61-e53d94db16c4" Jan 28 15:06:20 
crc kubenswrapper[4981]: E0128 15:06:20.704181 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9gspg" podUID="06d03aa9-a3ff-46c7-bafd-4666c5adf6c1" Jan 28 15:06:21 crc kubenswrapper[4981]: I0128 15:06:21.333735 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-k7sgz" Jan 28 15:06:21 crc kubenswrapper[4981]: E0128 15:06:21.335669 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 15:06:21 crc kubenswrapper[4981]: E0128 15:06:21.335816 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bv2kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rnbwh_openshift-marketplace(10fb71f7-ebd9-4ce7-91e8-cabe64948d68): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:06:21 crc kubenswrapper[4981]: E0128 15:06:21.337422 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rnbwh" podUID="10fb71f7-ebd9-4ce7-91e8-cabe64948d68" Jan 28 15:06:23 crc kubenswrapper[4981]: E0128 15:06:23.065991 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage1918790969/1\": happened during read: context canceled" 
image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 15:06:23 crc kubenswrapper[4981]: E0128 15:06:23.066345 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rnbwh" podUID="10fb71f7-ebd9-4ce7-91e8-cabe64948d68" Jan 28 15:06:23 crc kubenswrapper[4981]: E0128 15:06:23.066650 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hhbvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-k2h52_openshift-marketplace(0d2bf303-3b5f-4c86-bf7e-eeef12979424): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage1918790969/1\": happened during read: context canceled" logger="UnhandledError" Jan 28 15:06:23 crc kubenswrapper[4981]: E0128 15:06:23.068035 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \\\"/var/tmp/container_images_storage1918790969/1\\\": happened during read: context canceled\"" pod="openshift-marketplace/redhat-operators-k2h52" podUID="0d2bf303-3b5f-4c86-bf7e-eeef12979424" Jan 28 15:06:23 crc kubenswrapper[4981]: E0128 15:06:23.182331 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 15:06:23 crc kubenswrapper[4981]: E0128 15:06:23.182540 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7dk7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-t2mhx_openshift-marketplace(4103aaf7-a794-43ec-be3d-3c72aff08400): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:06:23 crc kubenswrapper[4981]: E0128 15:06:23.184556 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-t2mhx" podUID="4103aaf7-a794-43ec-be3d-3c72aff08400" Jan 28 15:06:23 crc kubenswrapper[4981]: E0128 15:06:23.997640 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-k2h52" podUID="0d2bf303-3b5f-4c86-bf7e-eeef12979424" Jan 28 15:06:24 crc kubenswrapper[4981]: E0128 15:06:23.999540 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-t2mhx" podUID="4103aaf7-a794-43ec-be3d-3c72aff08400" Jan 28 15:06:24 crc kubenswrapper[4981]: E0128 15:06:24.266749 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 15:06:24 crc kubenswrapper[4981]: E0128 15:06:24.267008 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llwzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-v42ln_openshift-marketplace(20cbd927-94b5-452e-a139-8d797dd4f4f7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:06:24 crc kubenswrapper[4981]: E0128 15:06:24.268511 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-v42ln" podUID="20cbd927-94b5-452e-a139-8d797dd4f4f7" Jan 28 15:06:25 crc kubenswrapper[4981]: I0128 15:06:25.001137 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-8rsts" event={"ID":"d5fda60c-a87b-4810-81df-4c7717d34ac1","Type":"ContainerStarted","Data":"be32be4409930304749e8d3825db73601668e3e307b7b58fc1ac74326a4e6aa4"} Jan 28 15:06:25 crc kubenswrapper[4981]: E0128 15:06:25.005242 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v42ln" podUID="20cbd927-94b5-452e-a139-8d797dd4f4f7" Jan 28 15:06:25 crc kubenswrapper[4981]: I0128 15:06:25.017531 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-8rsts" podStartSLOduration=171.017507184 podStartE2EDuration="2m51.017507184s" podCreationTimestamp="2026-01-28 15:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:06:25.016509841 +0000 UTC m=+196.468668122" watchObservedRunningTime="2026-01-28 15:06:25.017507184 +0000 UTC m=+196.469665435" Jan 28 15:06:25 crc kubenswrapper[4981]: E0128 15:06:25.400620 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:a6a43e8619a6aee86e08a0cfbb4efa649d7a8392f5b08ff9f5eadfdd7fab8671: Get 
\"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:a6a43e8619a6aee86e08a0cfbb4efa649d7a8392f5b08ff9f5eadfdd7fab8671\": context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 15:06:25 crc kubenswrapper[4981]: E0128 15:06:25.400991 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82dk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dqqn8_openshift-marketplace(b1101605-52b4-4c83-9958-11c0fe93d5e3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:a6a43e8619a6aee86e08a0cfbb4efa649d7a8392f5b08ff9f5eadfdd7fab8671: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:a6a43e8619a6aee86e08a0cfbb4efa649d7a8392f5b08ff9f5eadfdd7fab8671\": context canceled" logger="UnhandledError" Jan 28 15:06:25 crc kubenswrapper[4981]: E0128 15:06:25.402498 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:a6a43e8619a6aee86e08a0cfbb4efa649d7a8392f5b08ff9f5eadfdd7fab8671: Get \\\"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:a6a43e8619a6aee86e08a0cfbb4efa649d7a8392f5b08ff9f5eadfdd7fab8671\\\": context canceled\"" pod="openshift-marketplace/redhat-operators-dqqn8" podUID="b1101605-52b4-4c83-9958-11c0fe93d5e3" Jan 28 15:06:25 crc kubenswrapper[4981]: E0128 15:06:25.420668 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 15:06:25 crc kubenswrapper[4981]: E0128 15:06:25.421003 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x5b96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-57gl4_openshift-marketplace(271ceb24-1e9d-44c5-a8d2-168c2b34d81a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:06:25 crc kubenswrapper[4981]: E0128 15:06:25.422347 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-57gl4" podUID="271ceb24-1e9d-44c5-a8d2-168c2b34d81a" Jan 28 15:06:26 crc kubenswrapper[4981]: E0128 15:06:26.010489 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dqqn8" podUID="b1101605-52b4-4c83-9958-11c0fe93d5e3" Jan 28 15:06:27 crc kubenswrapper[4981]: I0128 15:06:27.984203 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 15:06:27 crc kubenswrapper[4981]: E0128 15:06:27.984901 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05a8c011-4ea2-4515-a8a5-b77578c6517d" containerName="pruner" Jan 28 15:06:27 crc kubenswrapper[4981]: I0128 15:06:27.984917 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="05a8c011-4ea2-4515-a8a5-b77578c6517d" containerName="pruner" Jan 28 15:06:27 crc kubenswrapper[4981]: E0128 15:06:27.984937 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5265f29-45aa-495a-9c5d-3d5b1c3459d5" containerName="pruner" Jan 28 15:06:27 crc kubenswrapper[4981]: I0128 15:06:27.984943 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5265f29-45aa-495a-9c5d-3d5b1c3459d5" containerName="pruner" Jan 28 15:06:27 crc kubenswrapper[4981]: E0128 
15:06:27.984959 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed4c6fed-3e17-40a9-b844-adc144028848" containerName="collect-profiles" Jan 28 15:06:27 crc kubenswrapper[4981]: I0128 15:06:27.984967 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed4c6fed-3e17-40a9-b844-adc144028848" containerName="collect-profiles" Jan 28 15:06:27 crc kubenswrapper[4981]: I0128 15:06:27.985083 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed4c6fed-3e17-40a9-b844-adc144028848" containerName="collect-profiles" Jan 28 15:06:27 crc kubenswrapper[4981]: I0128 15:06:27.985093 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5265f29-45aa-495a-9c5d-3d5b1c3459d5" containerName="pruner" Jan 28 15:06:27 crc kubenswrapper[4981]: I0128 15:06:27.985108 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="05a8c011-4ea2-4515-a8a5-b77578c6517d" containerName="pruner" Jan 28 15:06:27 crc kubenswrapper[4981]: I0128 15:06:27.985583 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:06:27 crc kubenswrapper[4981]: I0128 15:06:27.995554 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 15:06:27 crc kubenswrapper[4981]: I0128 15:06:27.995559 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 15:06:27 crc kubenswrapper[4981]: I0128 15:06:27.995617 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 15:06:28 crc kubenswrapper[4981]: I0128 15:06:28.090803 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0049863-22d6-48ff-9d32-77be1504bb89-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f0049863-22d6-48ff-9d32-77be1504bb89\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:06:28 crc kubenswrapper[4981]: I0128 15:06:28.090906 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0049863-22d6-48ff-9d32-77be1504bb89-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f0049863-22d6-48ff-9d32-77be1504bb89\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:06:28 crc kubenswrapper[4981]: I0128 15:06:28.191871 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0049863-22d6-48ff-9d32-77be1504bb89-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f0049863-22d6-48ff-9d32-77be1504bb89\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:06:28 crc kubenswrapper[4981]: I0128 15:06:28.192767 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0049863-22d6-48ff-9d32-77be1504bb89-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f0049863-22d6-48ff-9d32-77be1504bb89\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:06:28 crc kubenswrapper[4981]: I0128 15:06:28.192929 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0049863-22d6-48ff-9d32-77be1504bb89-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: 
\"f0049863-22d6-48ff-9d32-77be1504bb89\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:06:28 crc kubenswrapper[4981]: I0128 15:06:28.217259 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0049863-22d6-48ff-9d32-77be1504bb89-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f0049863-22d6-48ff-9d32-77be1504bb89\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:06:28 crc kubenswrapper[4981]: I0128 15:06:28.320871 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:06:28 crc kubenswrapper[4981]: I0128 15:06:28.741075 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 15:06:28 crc kubenswrapper[4981]: W0128 15:06:28.749949 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf0049863_22d6_48ff_9d32_77be1504bb89.slice/crio-edeecae7b625cae74f86dcdbae0679735a4cd05110705b43994019e51983485e WatchSource:0}: Error finding container edeecae7b625cae74f86dcdbae0679735a4cd05110705b43994019e51983485e: Status 404 returned error can't find the container with id edeecae7b625cae74f86dcdbae0679735a4cd05110705b43994019e51983485e Jan 28 15:06:29 crc kubenswrapper[4981]: I0128 15:06:29.028004 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f0049863-22d6-48ff-9d32-77be1504bb89","Type":"ContainerStarted","Data":"edeecae7b625cae74f86dcdbae0679735a4cd05110705b43994019e51983485e"} Jan 28 15:06:30 crc kubenswrapper[4981]: I0128 15:06:30.034179 4981 generic.go:334] "Generic (PLEG): container finished" podID="f0049863-22d6-48ff-9d32-77be1504bb89" containerID="c9335a7c0123c6806b5edb6925e3e4aaae77b10a121e3700abff3373b887847b" exitCode=0 Jan 28 15:06:30 crc kubenswrapper[4981]: I0128 15:06:30.034344 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f0049863-22d6-48ff-9d32-77be1504bb89","Type":"ContainerDied","Data":"c9335a7c0123c6806b5edb6925e3e4aaae77b10a121e3700abff3373b887847b"} Jan 28 15:06:31 crc kubenswrapper[4981]: I0128 15:06:31.461855 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:06:31 crc kubenswrapper[4981]: I0128 15:06:31.642088 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0049863-22d6-48ff-9d32-77be1504bb89-kube-api-access\") pod \"f0049863-22d6-48ff-9d32-77be1504bb89\" (UID: \"f0049863-22d6-48ff-9d32-77be1504bb89\") " Jan 28 15:06:31 crc kubenswrapper[4981]: I0128 15:06:31.642288 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0049863-22d6-48ff-9d32-77be1504bb89-kubelet-dir\") pod \"f0049863-22d6-48ff-9d32-77be1504bb89\" (UID: \"f0049863-22d6-48ff-9d32-77be1504bb89\") " Jan 28 15:06:31 crc kubenswrapper[4981]: I0128 15:06:31.642890 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0049863-22d6-48ff-9d32-77be1504bb89-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f0049863-22d6-48ff-9d32-77be1504bb89" (UID: "f0049863-22d6-48ff-9d32-77be1504bb89"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:06:31 crc kubenswrapper[4981]: I0128 15:06:31.648097 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0049863-22d6-48ff-9d32-77be1504bb89-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f0049863-22d6-48ff-9d32-77be1504bb89" (UID: "f0049863-22d6-48ff-9d32-77be1504bb89"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:06:31 crc kubenswrapper[4981]: I0128 15:06:31.744106 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f0049863-22d6-48ff-9d32-77be1504bb89-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:31 crc kubenswrapper[4981]: I0128 15:06:31.744135 4981 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0049863-22d6-48ff-9d32-77be1504bb89-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:32 crc kubenswrapper[4981]: I0128 15:06:32.053904 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"f0049863-22d6-48ff-9d32-77be1504bb89","Type":"ContainerDied","Data":"edeecae7b625cae74f86dcdbae0679735a4cd05110705b43994019e51983485e"} Jan 28 15:06:32 crc kubenswrapper[4981]: I0128 15:06:32.054272 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edeecae7b625cae74f86dcdbae0679735a4cd05110705b43994019e51983485e" Jan 28 15:06:32 crc kubenswrapper[4981]: I0128 15:06:32.054023 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 15:06:34 crc kubenswrapper[4981]: I0128 15:06:34.068551 4981 generic.go:334] "Generic (PLEG): container finished" podID="06d03aa9-a3ff-46c7-bafd-4666c5adf6c1" containerID="fd2272b6334162cdbb08c76ff2b8b14ffeaa7a1fdb6d2a5262e8be7a3666e442" exitCode=0 Jan 28 15:06:34 crc kubenswrapper[4981]: I0128 15:06:34.068665 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gspg" event={"ID":"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1","Type":"ContainerDied","Data":"fd2272b6334162cdbb08c76ff2b8b14ffeaa7a1fdb6d2a5262e8be7a3666e442"} Jan 28 15:06:34 crc kubenswrapper[4981]: I0128 15:06:34.782015 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 15:06:34 crc kubenswrapper[4981]: E0128 15:06:34.782623 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0049863-22d6-48ff-9d32-77be1504bb89" containerName="pruner" Jan 28 15:06:34 crc kubenswrapper[4981]: I0128 15:06:34.782636 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0049863-22d6-48ff-9d32-77be1504bb89" containerName="pruner" Jan 28 15:06:34 crc kubenswrapper[4981]: I0128 15:06:34.782745 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0049863-22d6-48ff-9d32-77be1504bb89" containerName="pruner" Jan 28 15:06:34 crc kubenswrapper[4981]: I0128 15:06:34.783129 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:06:34 crc kubenswrapper[4981]: I0128 15:06:34.786176 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 15:06:34 crc kubenswrapper[4981]: I0128 15:06:34.790783 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 15:06:34 crc kubenswrapper[4981]: I0128 15:06:34.794597 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 15:06:34 crc kubenswrapper[4981]: I0128 15:06:34.889737 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e152bac2-8343-44cd-8df7-659fc89ad725-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e152bac2-8343-44cd-8df7-659fc89ad725\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:06:34 crc kubenswrapper[4981]: I0128 15:06:34.889793 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e152bac2-8343-44cd-8df7-659fc89ad725-var-lock\") pod \"installer-9-crc\" (UID: \"e152bac2-8343-44cd-8df7-659fc89ad725\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:06:34 crc kubenswrapper[4981]: I0128 15:06:34.890014 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e152bac2-8343-44cd-8df7-659fc89ad725-kube-api-access\") pod \"installer-9-crc\" (UID: \"e152bac2-8343-44cd-8df7-659fc89ad725\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:06:34 crc kubenswrapper[4981]: I0128 15:06:34.991846 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e152bac2-8343-44cd-8df7-659fc89ad725-var-lock\") pod \"installer-9-crc\" (UID: \"e152bac2-8343-44cd-8df7-659fc89ad725\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:06:34 crc kubenswrapper[4981]: I0128 15:06:34.991930 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e152bac2-8343-44cd-8df7-659fc89ad725-kube-api-access\") pod \"installer-9-crc\" (UID: \"e152bac2-8343-44cd-8df7-659fc89ad725\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:06:34 crc kubenswrapper[4981]: I0128 15:06:34.992016 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e152bac2-8343-44cd-8df7-659fc89ad725-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e152bac2-8343-44cd-8df7-659fc89ad725\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:06:34 crc kubenswrapper[4981]: I0128 15:06:34.992015 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e152bac2-8343-44cd-8df7-659fc89ad725-var-lock\") pod \"installer-9-crc\" (UID: \"e152bac2-8343-44cd-8df7-659fc89ad725\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:06:34 crc kubenswrapper[4981]: I0128 15:06:34.992158 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e152bac2-8343-44cd-8df7-659fc89ad725-kubelet-dir\") pod \"installer-9-crc\" (UID: 
\"e152bac2-8343-44cd-8df7-659fc89ad725\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:06:35 crc kubenswrapper[4981]: I0128 15:06:35.018404 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e152bac2-8343-44cd-8df7-659fc89ad725-kube-api-access\") pod \"installer-9-crc\" (UID: \"e152bac2-8343-44cd-8df7-659fc89ad725\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:06:35 crc kubenswrapper[4981]: I0128 15:06:35.079294 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gspg" event={"ID":"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1","Type":"ContainerStarted","Data":"35a94a3b5c466b0a495713441cfa24da7be3a0af42454266ffcc92f01c1b11d8"} Jan 28 15:06:35 crc kubenswrapper[4981]: I0128 15:06:35.098970 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 15:06:35 crc kubenswrapper[4981]: I0128 15:06:35.352742 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9gspg" podStartSLOduration=3.116326913 podStartE2EDuration="48.352716925s" podCreationTimestamp="2026-01-28 15:05:47 +0000 UTC" firstStartedPulling="2026-01-28 15:05:49.352096775 +0000 UTC m=+160.804255016" lastFinishedPulling="2026-01-28 15:06:34.588486787 +0000 UTC m=+206.040645028" observedRunningTime="2026-01-28 15:06:35.099172671 +0000 UTC m=+206.551330912" watchObservedRunningTime="2026-01-28 15:06:35.352716925 +0000 UTC m=+206.804875166" Jan 28 15:06:35 crc kubenswrapper[4981]: I0128 15:06:35.529814 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 15:06:35 crc kubenswrapper[4981]: W0128 15:06:35.539510 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode152bac2_8343_44cd_8df7_659fc89ad725.slice/crio-81c8b981c92a2a832f967f5545a128c99cc7c2754e123af3879e176e732181e0 WatchSource:0}: Error finding container 81c8b981c92a2a832f967f5545a128c99cc7c2754e123af3879e176e732181e0: Status 404 returned error can't find the container with id 81c8b981c92a2a832f967f5545a128c99cc7c2754e123af3879e176e732181e0 Jan 28 15:06:36 crc kubenswrapper[4981]: I0128 15:06:36.087547 4981 generic.go:334] "Generic (PLEG): container finished" podID="10fb71f7-ebd9-4ce7-91e8-cabe64948d68" containerID="9aa33c129de91e465fc01a3c32a871c7dde565d83df54f7a9239fde0e26db89b" exitCode=0 Jan 28 15:06:36 crc kubenswrapper[4981]: I0128 15:06:36.087633 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnbwh" event={"ID":"10fb71f7-ebd9-4ce7-91e8-cabe64948d68","Type":"ContainerDied","Data":"9aa33c129de91e465fc01a3c32a871c7dde565d83df54f7a9239fde0e26db89b"} Jan 28 15:06:36 crc kubenswrapper[4981]: I0128 15:06:36.090603 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e152bac2-8343-44cd-8df7-659fc89ad725","Type":"ContainerStarted","Data":"c650b9be57db7672536359204d83fe306bed014fce01cc533d8ec42f92dfd9f6"} Jan 28 15:06:36 crc kubenswrapper[4981]: I0128 15:06:36.090659 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e152bac2-8343-44cd-8df7-659fc89ad725","Type":"ContainerStarted","Data":"81c8b981c92a2a832f967f5545a128c99cc7c2754e123af3879e176e732181e0"} Jan 28 15:06:36 crc kubenswrapper[4981]: I0128 15:06:36.126487 4981 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.126463118 podStartE2EDuration="2.126463118s" podCreationTimestamp="2026-01-28 15:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:06:36.126176718 +0000 UTC m=+207.578334959" watchObservedRunningTime="2026-01-28 15:06:36.126463118 +0000 UTC m=+207.578621349" Jan 28 15:06:37 crc kubenswrapper[4981]: I0128 15:06:37.102356 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnbwh" event={"ID":"10fb71f7-ebd9-4ce7-91e8-cabe64948d68","Type":"ContainerStarted","Data":"214cb5050b07c68d3f285bda2e97e86b258f47d6a99ec8a4cf495965030c95b1"} Jan 28 15:06:37 crc kubenswrapper[4981]: I0128 15:06:37.105737 4981 generic.go:334] "Generic (PLEG): container finished" podID="a4be2641-d6f3-4f86-ac61-e53d94db16c4" containerID="f4ba4982e5fd8d8718c836ef55def57be552700b2ecf55731de754ad712aacfb" exitCode=0 Jan 28 15:06:37 crc kubenswrapper[4981]: I0128 15:06:37.105810 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nm89" event={"ID":"a4be2641-d6f3-4f86-ac61-e53d94db16c4","Type":"ContainerDied","Data":"f4ba4982e5fd8d8718c836ef55def57be552700b2ecf55731de754ad712aacfb"} Jan 28 15:06:37 crc kubenswrapper[4981]: I0128 15:06:37.121072 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rnbwh" podStartSLOduration=4.197087226 podStartE2EDuration="49.121053405s" podCreationTimestamp="2026-01-28 15:05:48 +0000 UTC" firstStartedPulling="2026-01-28 15:05:51.612584902 +0000 UTC m=+163.064743143" lastFinishedPulling="2026-01-28 15:06:36.536551081 +0000 UTC m=+207.988709322" observedRunningTime="2026-01-28 15:06:37.119262455 +0000 UTC m=+208.571420696" watchObservedRunningTime="2026-01-28 15:06:37.121053405 +0000 UTC m=+208.573211646" Jan 28 15:06:38 crc kubenswrapper[4981]: I0128 15:06:38.041474 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9gspg" Jan 28 15:06:38 crc kubenswrapper[4981]: I0128 15:06:38.041606 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9gspg" Jan 28 15:06:38 crc kubenswrapper[4981]: I0128 15:06:38.112124 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nm89" event={"ID":"a4be2641-d6f3-4f86-ac61-e53d94db16c4","Type":"ContainerStarted","Data":"e980b196718ee2dd1257a93e1cca53ede6a6761936cbb884afbb6b0489dd4b65"} Jan 28 15:06:38 crc kubenswrapper[4981]: I0128 15:06:38.195642 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9gspg" Jan 28 15:06:38 crc kubenswrapper[4981]: I0128 15:06:38.645220 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rnbwh" Jan 28 15:06:38 crc kubenswrapper[4981]: I0128 15:06:38.645283 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rnbwh" Jan 28 15:06:38 crc kubenswrapper[4981]: I0128 15:06:38.686743 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rnbwh" Jan 28 15:06:39 crc kubenswrapper[4981]: I0128 
15:06:39.120914 4981 generic.go:334] "Generic (PLEG): container finished" podID="20cbd927-94b5-452e-a139-8d797dd4f4f7" containerID="fac89806857ac938a954c7e5c1269b31b93472b2515b59501914cf920cd995c3" exitCode=0 Jan 28 15:06:39 crc kubenswrapper[4981]: I0128 15:06:39.121002 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v42ln" event={"ID":"20cbd927-94b5-452e-a139-8d797dd4f4f7","Type":"ContainerDied","Data":"fac89806857ac938a954c7e5c1269b31b93472b2515b59501914cf920cd995c3"} Jan 28 15:06:39 crc kubenswrapper[4981]: I0128 15:06:39.174605 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2nm89" podStartSLOduration=2.656733495 podStartE2EDuration="51.174574022s" podCreationTimestamp="2026-01-28 15:05:48 +0000 UTC" firstStartedPulling="2026-01-28 15:05:49.40329399 +0000 UTC m=+160.855452231" lastFinishedPulling="2026-01-28 15:06:37.921134527 +0000 UTC m=+209.373292758" observedRunningTime="2026-01-28 15:06:39.170755546 +0000 UTC m=+210.622913797" watchObservedRunningTime="2026-01-28 15:06:39.174574022 +0000 UTC m=+210.626732273" Jan 28 15:06:39 crc kubenswrapper[4981]: I0128 15:06:39.178950 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9gspg" Jan 28 15:06:40 crc kubenswrapper[4981]: I0128 15:06:40.141305 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v42ln" event={"ID":"20cbd927-94b5-452e-a139-8d797dd4f4f7","Type":"ContainerStarted","Data":"7b7250fca34e373bbeec0f2d174bd84c9c32b8c4202cdecce0e4e641c0f2d188"} Jan 28 15:06:40 crc kubenswrapper[4981]: I0128 15:06:40.145596 4981 generic.go:334] "Generic (PLEG): container finished" podID="4103aaf7-a794-43ec-be3d-3c72aff08400" containerID="15ea0151facbbee78fab8324d720a356c286d5c61958db4b514397dd32f08cbc" exitCode=0 Jan 28 15:06:40 crc kubenswrapper[4981]: I0128 15:06:40.145735 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2mhx" event={"ID":"4103aaf7-a794-43ec-be3d-3c72aff08400","Type":"ContainerDied","Data":"15ea0151facbbee78fab8324d720a356c286d5c61958db4b514397dd32f08cbc"} Jan 28 15:06:40 crc kubenswrapper[4981]: I0128 15:06:40.160792 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v42ln" podStartSLOduration=2.244018206 podStartE2EDuration="50.160772181s" podCreationTimestamp="2026-01-28 15:05:50 +0000 UTC" firstStartedPulling="2026-01-28 15:05:51.678917924 +0000 UTC m=+163.131076165" lastFinishedPulling="2026-01-28 15:06:39.595671899 +0000 UTC m=+211.047830140" observedRunningTime="2026-01-28 15:06:40.156732648 +0000 UTC m=+211.608890889" watchObservedRunningTime="2026-01-28 15:06:40.160772181 +0000 UTC m=+211.612930422" Jan 28 15:06:40 crc kubenswrapper[4981]: I0128 15:06:40.928967 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v42ln" Jan 28 15:06:40 crc kubenswrapper[4981]: I0128 15:06:40.929400 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v42ln" Jan 28 15:06:41 crc kubenswrapper[4981]: I0128 15:06:41.972026 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-v42ln" podUID="20cbd927-94b5-452e-a139-8d797dd4f4f7" containerName="registry-server" probeResult="failure" output=< Jan 28 15:06:41 
crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s Jan 28 15:06:41 crc kubenswrapper[4981]: > Jan 28 15:06:46 crc kubenswrapper[4981]: I0128 15:06:46.182248 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2mhx" event={"ID":"4103aaf7-a794-43ec-be3d-3c72aff08400","Type":"ContainerStarted","Data":"6b29b65ce58f19eacf187ca5e5a0d87265a351528c84bf2d9d4e92803e9310dc"} Jan 28 15:06:46 crc kubenswrapper[4981]: I0128 15:06:46.184535 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqqn8" event={"ID":"b1101605-52b4-4c83-9958-11c0fe93d5e3","Type":"ContainerStarted","Data":"89e7039bbb5b979becf93ca82135aac580b5f1a05e50916af747edf9e4a8b7e3"} Jan 28 15:06:46 crc kubenswrapper[4981]: I0128 15:06:46.187235 4981 generic.go:334] "Generic (PLEG): container finished" podID="271ceb24-1e9d-44c5-a8d2-168c2b34d81a" containerID="ba125ab791b322a3d11c7b03e3454539fcfa9c501afa3e94b15dad0bdf1a7912" exitCode=0 Jan 28 15:06:46 crc kubenswrapper[4981]: I0128 15:06:46.187304 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57gl4" event={"ID":"271ceb24-1e9d-44c5-a8d2-168c2b34d81a","Type":"ContainerDied","Data":"ba125ab791b322a3d11c7b03e3454539fcfa9c501afa3e94b15dad0bdf1a7912"} Jan 28 15:06:46 crc kubenswrapper[4981]: I0128 15:06:46.189165 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2h52" event={"ID":"0d2bf303-3b5f-4c86-bf7e-eeef12979424","Type":"ContainerStarted","Data":"d26ad41d99bc4a72ee23542798cb19df8ff020d0aa8cd5a1cf8943daa87f5dab"} Jan 28 15:06:46 crc kubenswrapper[4981]: I0128 15:06:46.204945 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-t2mhx" podStartSLOduration=9.214095049 podStartE2EDuration="58.204920839s" podCreationTimestamp="2026-01-28 15:05:48 +0000 UTC" firstStartedPulling="2026-01-28 15:05:51.684277357 +0000 UTC m=+163.136435598" lastFinishedPulling="2026-01-28 15:06:40.675103147 +0000 UTC m=+212.127261388" observedRunningTime="2026-01-28 15:06:46.2004133 +0000 UTC m=+217.652571551" watchObservedRunningTime="2026-01-28 15:06:46.204920839 +0000 UTC m=+217.657079080" Jan 28 15:06:47 crc kubenswrapper[4981]: I0128 15:06:47.204241 4981 generic.go:334] "Generic (PLEG): container finished" podID="b1101605-52b4-4c83-9958-11c0fe93d5e3" containerID="89e7039bbb5b979becf93ca82135aac580b5f1a05e50916af747edf9e4a8b7e3" exitCode=0 Jan 28 15:06:47 crc kubenswrapper[4981]: I0128 15:06:47.204343 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqqn8" event={"ID":"b1101605-52b4-4c83-9958-11c0fe93d5e3","Type":"ContainerDied","Data":"89e7039bbb5b979becf93ca82135aac580b5f1a05e50916af747edf9e4a8b7e3"} Jan 28 15:06:47 crc kubenswrapper[4981]: I0128 15:06:47.209320 4981 generic.go:334] "Generic (PLEG): container finished" podID="0d2bf303-3b5f-4c86-bf7e-eeef12979424" containerID="d26ad41d99bc4a72ee23542798cb19df8ff020d0aa8cd5a1cf8943daa87f5dab" exitCode=0 Jan 28 15:06:47 crc kubenswrapper[4981]: I0128 15:06:47.209382 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2h52" event={"ID":"0d2bf303-3b5f-4c86-bf7e-eeef12979424","Type":"ContainerDied","Data":"d26ad41d99bc4a72ee23542798cb19df8ff020d0aa8cd5a1cf8943daa87f5dab"} Jan 28 15:06:48 crc kubenswrapper[4981]: I0128 15:06:48.216336 4981 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-57gl4" event={"ID":"271ceb24-1e9d-44c5-a8d2-168c2b34d81a","Type":"ContainerStarted","Data":"f29f1c0bba7871a14ce5e53f6089b4bb73e2de6cf196d30bb6abd8fb6c90c879"} Jan 28 15:06:48 crc kubenswrapper[4981]: I0128 15:06:48.219624 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2h52" event={"ID":"0d2bf303-3b5f-4c86-bf7e-eeef12979424","Type":"ContainerStarted","Data":"b614ef24d274093535b0043e4a08ecd3ccc420e18538432f01c7fd72b657c9d0"} Jan 28 15:06:48 crc kubenswrapper[4981]: I0128 15:06:48.221726 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqqn8" event={"ID":"b1101605-52b4-4c83-9958-11c0fe93d5e3","Type":"ContainerStarted","Data":"4fd51047e389a4ad226b82e520ca1486e074cc00078e47301625638874d5d73b"} Jan 28 15:06:48 crc kubenswrapper[4981]: I0128 15:06:48.231538 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-57gl4" podStartSLOduration=5.042547189 podStartE2EDuration="58.231515938s" podCreationTimestamp="2026-01-28 15:05:50 +0000 UTC" firstStartedPulling="2026-01-28 15:05:53.767357716 +0000 UTC m=+165.219515957" lastFinishedPulling="2026-01-28 15:06:46.956326425 +0000 UTC m=+218.408484706" observedRunningTime="2026-01-28 15:06:48.230016279 +0000 UTC m=+219.682174520" watchObservedRunningTime="2026-01-28 15:06:48.231515938 +0000 UTC m=+219.683674189" Jan 28 15:06:48 crc kubenswrapper[4981]: I0128 15:06:48.288529 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dqqn8" podStartSLOduration=3.19455213 podStartE2EDuration="57.288510121s" podCreationTimestamp="2026-01-28 15:05:51 +0000 UTC" firstStartedPulling="2026-01-28 15:05:53.743382199 +0000 UTC m=+165.195540440" lastFinishedPulling="2026-01-28 15:06:47.83734018 +0000 UTC m=+219.289498431" observedRunningTime="2026-01-28 15:06:48.28516784 +0000 UTC m=+219.737326081" watchObservedRunningTime="2026-01-28 15:06:48.288510121 +0000 UTC m=+219.740668362" Jan 28 15:06:48 crc kubenswrapper[4981]: I0128 15:06:48.329300 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k2h52" podStartSLOduration=3.286154922 podStartE2EDuration="57.329271667s" podCreationTimestamp="2026-01-28 15:05:51 +0000 UTC" firstStartedPulling="2026-01-28 15:05:53.768362931 +0000 UTC m=+165.220521172" lastFinishedPulling="2026-01-28 15:06:47.811479666 +0000 UTC m=+219.263637917" observedRunningTime="2026-01-28 15:06:48.326692942 +0000 UTC m=+219.778851173" watchObservedRunningTime="2026-01-28 15:06:48.329271667 +0000 UTC m=+219.781429918" Jan 28 15:06:48 crc kubenswrapper[4981]: I0128 15:06:48.424928 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2nm89" Jan 28 15:06:48 crc kubenswrapper[4981]: I0128 15:06:48.424991 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2nm89" Jan 28 15:06:48 crc kubenswrapper[4981]: I0128 15:06:48.471823 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2nm89" Jan 28 15:06:48 crc kubenswrapper[4981]: I0128 15:06:48.691320 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rnbwh" Jan 28 15:06:48 crc kubenswrapper[4981]: I0128 
15:06:48.849486 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-t2mhx" Jan 28 15:06:48 crc kubenswrapper[4981]: I0128 15:06:48.849563 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-t2mhx" Jan 28 15:06:48 crc kubenswrapper[4981]: I0128 15:06:48.887080 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-t2mhx" Jan 28 15:06:49 crc kubenswrapper[4981]: I0128 15:06:49.262832 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2nm89" Jan 28 15:06:49 crc kubenswrapper[4981]: I0128 15:06:49.897632 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:06:49 crc kubenswrapper[4981]: I0128 15:06:49.897728 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:06:49 crc kubenswrapper[4981]: I0128 15:06:49.897803 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:06:49 crc kubenswrapper[4981]: I0128 15:06:49.898711 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6"} pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:06:49 crc kubenswrapper[4981]: I0128 15:06:49.898906 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" containerID="cri-o://a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6" gracePeriod=600 Jan 28 15:06:50 crc kubenswrapper[4981]: I0128 15:06:50.506377 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2nm89"] Jan 28 15:06:50 crc kubenswrapper[4981]: I0128 15:06:50.969934 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v42ln" Jan 28 15:06:51 crc kubenswrapper[4981]: I0128 15:06:51.013824 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v42ln" Jan 28 15:06:51 crc kubenswrapper[4981]: I0128 15:06:51.239671 4981 generic.go:334] "Generic (PLEG): container finished" podID="67525d77-715e-4ec3-bdbb-6854657355c0" containerID="a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6" exitCode=0 Jan 28 15:06:51 crc kubenswrapper[4981]: I0128 15:06:51.239731 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" 
event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerDied","Data":"a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6"} Jan 28 15:06:51 crc kubenswrapper[4981]: I0128 15:06:51.239967 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2nm89" podUID="a4be2641-d6f3-4f86-ac61-e53d94db16c4" containerName="registry-server" containerID="cri-o://e980b196718ee2dd1257a93e1cca53ede6a6761936cbb884afbb6b0489dd4b65" gracePeriod=2 Jan 28 15:06:51 crc kubenswrapper[4981]: I0128 15:06:51.345508 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-57gl4" Jan 28 15:06:51 crc kubenswrapper[4981]: I0128 15:06:51.345778 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-57gl4" Jan 28 15:06:51 crc kubenswrapper[4981]: I0128 15:06:51.387866 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-57gl4" Jan 28 15:06:51 crc kubenswrapper[4981]: I0128 15:06:51.436428 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dqqn8" Jan 28 15:06:51 crc kubenswrapper[4981]: I0128 15:06:51.436506 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dqqn8" Jan 28 15:06:51 crc kubenswrapper[4981]: I0128 15:06:51.608777 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k2h52" Jan 28 15:06:51 crc kubenswrapper[4981]: I0128 15:06:51.608871 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k2h52" Jan 28 15:06:52 crc kubenswrapper[4981]: I0128 15:06:52.246840 4981 generic.go:334] "Generic (PLEG): container finished" podID="a4be2641-d6f3-4f86-ac61-e53d94db16c4" containerID="e980b196718ee2dd1257a93e1cca53ede6a6761936cbb884afbb6b0489dd4b65" exitCode=0 Jan 28 15:06:52 crc kubenswrapper[4981]: I0128 15:06:52.246925 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nm89" event={"ID":"a4be2641-d6f3-4f86-ac61-e53d94db16c4","Type":"ContainerDied","Data":"e980b196718ee2dd1257a93e1cca53ede6a6761936cbb884afbb6b0489dd4b65"} Jan 28 15:06:52 crc kubenswrapper[4981]: I0128 15:06:52.286063 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-57gl4" Jan 28 15:06:52 crc kubenswrapper[4981]: I0128 15:06:52.482328 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dqqn8" podUID="b1101605-52b4-4c83-9958-11c0fe93d5e3" containerName="registry-server" probeResult="failure" output=< Jan 28 15:06:52 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s Jan 28 15:06:52 crc kubenswrapper[4981]: > Jan 28 15:06:52 crc kubenswrapper[4981]: I0128 15:06:52.660347 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k2h52" podUID="0d2bf303-3b5f-4c86-bf7e-eeef12979424" containerName="registry-server" probeResult="failure" output=< Jan 28 15:06:52 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s Jan 28 15:06:52 crc kubenswrapper[4981]: > Jan 28 15:06:52 crc kubenswrapper[4981]: I0128 15:06:52.919270 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2nm89" Jan 28 15:06:53 crc kubenswrapper[4981]: I0128 15:06:53.082872 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4be2641-d6f3-4f86-ac61-e53d94db16c4-catalog-content\") pod \"a4be2641-d6f3-4f86-ac61-e53d94db16c4\" (UID: \"a4be2641-d6f3-4f86-ac61-e53d94db16c4\") " Jan 28 15:06:53 crc kubenswrapper[4981]: I0128 15:06:53.082946 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdqdn\" (UniqueName: \"kubernetes.io/projected/a4be2641-d6f3-4f86-ac61-e53d94db16c4-kube-api-access-jdqdn\") pod \"a4be2641-d6f3-4f86-ac61-e53d94db16c4\" (UID: \"a4be2641-d6f3-4f86-ac61-e53d94db16c4\") " Jan 28 15:06:53 crc kubenswrapper[4981]: I0128 15:06:53.083026 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4be2641-d6f3-4f86-ac61-e53d94db16c4-utilities\") pod \"a4be2641-d6f3-4f86-ac61-e53d94db16c4\" (UID: \"a4be2641-d6f3-4f86-ac61-e53d94db16c4\") " Jan 28 15:06:53 crc kubenswrapper[4981]: I0128 15:06:53.083986 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4be2641-d6f3-4f86-ac61-e53d94db16c4-utilities" (OuterVolumeSpecName: "utilities") pod "a4be2641-d6f3-4f86-ac61-e53d94db16c4" (UID: "a4be2641-d6f3-4f86-ac61-e53d94db16c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:06:53 crc kubenswrapper[4981]: I0128 15:06:53.092450 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4be2641-d6f3-4f86-ac61-e53d94db16c4-kube-api-access-jdqdn" (OuterVolumeSpecName: "kube-api-access-jdqdn") pod "a4be2641-d6f3-4f86-ac61-e53d94db16c4" (UID: "a4be2641-d6f3-4f86-ac61-e53d94db16c4"). InnerVolumeSpecName "kube-api-access-jdqdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:06:53 crc kubenswrapper[4981]: I0128 15:06:53.138391 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4be2641-d6f3-4f86-ac61-e53d94db16c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4be2641-d6f3-4f86-ac61-e53d94db16c4" (UID: "a4be2641-d6f3-4f86-ac61-e53d94db16c4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:06:53 crc kubenswrapper[4981]: I0128 15:06:53.184524 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4be2641-d6f3-4f86-ac61-e53d94db16c4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:53 crc kubenswrapper[4981]: I0128 15:06:53.184575 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdqdn\" (UniqueName: \"kubernetes.io/projected/a4be2641-d6f3-4f86-ac61-e53d94db16c4-kube-api-access-jdqdn\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:53 crc kubenswrapper[4981]: I0128 15:06:53.184591 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4be2641-d6f3-4f86-ac61-e53d94db16c4-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:53 crc kubenswrapper[4981]: I0128 15:06:53.254452 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2nm89" event={"ID":"a4be2641-d6f3-4f86-ac61-e53d94db16c4","Type":"ContainerDied","Data":"6a596b4bd638c9d2f0873be66595bd88bc7cb2a1cdfdb72f7e28d775ff8ec510"} Jan 28 15:06:53 crc kubenswrapper[4981]: I0128 15:06:53.254515 4981 scope.go:117] "RemoveContainer" containerID="e980b196718ee2dd1257a93e1cca53ede6a6761936cbb884afbb6b0489dd4b65" Jan 28 15:06:53 crc kubenswrapper[4981]: I0128 15:06:53.254881 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2nm89" Jan 28 15:06:53 crc kubenswrapper[4981]: I0128 15:06:53.257351 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerStarted","Data":"a64a7e643e94a4cd76d766a4c3449bca81854c0a93bd4d8a5c7f5aa7c7eb50b8"} Jan 28 15:06:53 crc kubenswrapper[4981]: I0128 15:06:53.277157 4981 scope.go:117] "RemoveContainer" containerID="f4ba4982e5fd8d8718c836ef55def57be552700b2ecf55731de754ad712aacfb" Jan 28 15:06:53 crc kubenswrapper[4981]: I0128 15:06:53.295618 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2nm89"] Jan 28 15:06:53 crc kubenswrapper[4981]: I0128 15:06:53.301021 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2nm89"] Jan 28 15:06:53 crc kubenswrapper[4981]: I0128 15:06:53.312582 4981 scope.go:117] "RemoveContainer" containerID="ae826ed6bf393ef698e0ccbf272115748ab6f73c755fbbbeee3b32e7b0c17b92" Jan 28 15:06:53 crc kubenswrapper[4981]: I0128 15:06:53.327027 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4be2641-d6f3-4f86-ac61-e53d94db16c4" path="/var/lib/kubelet/pods/a4be2641-d6f3-4f86-ac61-e53d94db16c4/volumes" Jan 28 15:06:56 crc kubenswrapper[4981]: I0128 15:06:56.308013 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-57gl4"] Jan 28 15:06:56 crc kubenswrapper[4981]: I0128 15:06:56.308944 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-57gl4" podUID="271ceb24-1e9d-44c5-a8d2-168c2b34d81a" containerName="registry-server" containerID="cri-o://f29f1c0bba7871a14ce5e53f6089b4bb73e2de6cf196d30bb6abd8fb6c90c879" gracePeriod=2 Jan 28 15:06:56 crc kubenswrapper[4981]: I0128 15:06:56.664404 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-57gl4" Jan 28 15:06:56 crc kubenswrapper[4981]: I0128 15:06:56.837528 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/271ceb24-1e9d-44c5-a8d2-168c2b34d81a-utilities\") pod \"271ceb24-1e9d-44c5-a8d2-168c2b34d81a\" (UID: \"271ceb24-1e9d-44c5-a8d2-168c2b34d81a\") " Jan 28 15:06:56 crc kubenswrapper[4981]: I0128 15:06:56.837649 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/271ceb24-1e9d-44c5-a8d2-168c2b34d81a-catalog-content\") pod \"271ceb24-1e9d-44c5-a8d2-168c2b34d81a\" (UID: \"271ceb24-1e9d-44c5-a8d2-168c2b34d81a\") " Jan 28 15:06:56 crc kubenswrapper[4981]: I0128 15:06:56.837721 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5b96\" (UniqueName: \"kubernetes.io/projected/271ceb24-1e9d-44c5-a8d2-168c2b34d81a-kube-api-access-x5b96\") pod \"271ceb24-1e9d-44c5-a8d2-168c2b34d81a\" (UID: \"271ceb24-1e9d-44c5-a8d2-168c2b34d81a\") " Jan 28 15:06:56 crc kubenswrapper[4981]: I0128 15:06:56.840285 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/271ceb24-1e9d-44c5-a8d2-168c2b34d81a-utilities" (OuterVolumeSpecName: "utilities") pod "271ceb24-1e9d-44c5-a8d2-168c2b34d81a" (UID: "271ceb24-1e9d-44c5-a8d2-168c2b34d81a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:06:56 crc kubenswrapper[4981]: I0128 15:06:56.846659 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/271ceb24-1e9d-44c5-a8d2-168c2b34d81a-kube-api-access-x5b96" (OuterVolumeSpecName: "kube-api-access-x5b96") pod "271ceb24-1e9d-44c5-a8d2-168c2b34d81a" (UID: "271ceb24-1e9d-44c5-a8d2-168c2b34d81a"). InnerVolumeSpecName "kube-api-access-x5b96". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:06:56 crc kubenswrapper[4981]: I0128 15:06:56.889176 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/271ceb24-1e9d-44c5-a8d2-168c2b34d81a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "271ceb24-1e9d-44c5-a8d2-168c2b34d81a" (UID: "271ceb24-1e9d-44c5-a8d2-168c2b34d81a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:06:56 crc kubenswrapper[4981]: I0128 15:06:56.938731 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/271ceb24-1e9d-44c5-a8d2-168c2b34d81a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:56 crc kubenswrapper[4981]: I0128 15:06:56.938776 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5b96\" (UniqueName: \"kubernetes.io/projected/271ceb24-1e9d-44c5-a8d2-168c2b34d81a-kube-api-access-x5b96\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:56 crc kubenswrapper[4981]: I0128 15:06:56.938793 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/271ceb24-1e9d-44c5-a8d2-168c2b34d81a-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:06:57 crc kubenswrapper[4981]: I0128 15:06:57.285707 4981 generic.go:334] "Generic (PLEG): container finished" podID="271ceb24-1e9d-44c5-a8d2-168c2b34d81a" containerID="f29f1c0bba7871a14ce5e53f6089b4bb73e2de6cf196d30bb6abd8fb6c90c879" exitCode=0 Jan 28 15:06:57 crc kubenswrapper[4981]: I0128 15:06:57.285785 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57gl4" event={"ID":"271ceb24-1e9d-44c5-a8d2-168c2b34d81a","Type":"ContainerDied","Data":"f29f1c0bba7871a14ce5e53f6089b4bb73e2de6cf196d30bb6abd8fb6c90c879"} Jan 28 15:06:57 crc kubenswrapper[4981]: I0128 15:06:57.285832 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-57gl4" event={"ID":"271ceb24-1e9d-44c5-a8d2-168c2b34d81a","Type":"ContainerDied","Data":"014e993ce6e1240245e38e89895f09a39d71f6052643fce5b574553fe21c1c0c"} Jan 28 15:06:57 crc kubenswrapper[4981]: I0128 15:06:57.285863 4981 scope.go:117] "RemoveContainer" containerID="f29f1c0bba7871a14ce5e53f6089b4bb73e2de6cf196d30bb6abd8fb6c90c879" Jan 28 15:06:57 crc kubenswrapper[4981]: I0128 15:06:57.286050 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-57gl4" Jan 28 15:06:57 crc kubenswrapper[4981]: I0128 15:06:57.317491 4981 scope.go:117] "RemoveContainer" containerID="ba125ab791b322a3d11c7b03e3454539fcfa9c501afa3e94b15dad0bdf1a7912" Jan 28 15:06:57 crc kubenswrapper[4981]: I0128 15:06:57.330127 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-57gl4"] Jan 28 15:06:57 crc kubenswrapper[4981]: I0128 15:06:57.330172 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-57gl4"] Jan 28 15:06:57 crc kubenswrapper[4981]: I0128 15:06:57.340830 4981 scope.go:117] "RemoveContainer" containerID="79b62f4589aba8d2615592c6ddfc12dd44f4bc66b217fbb4d212050bebf69f5c" Jan 28 15:06:57 crc kubenswrapper[4981]: I0128 15:06:57.359829 4981 scope.go:117] "RemoveContainer" containerID="f29f1c0bba7871a14ce5e53f6089b4bb73e2de6cf196d30bb6abd8fb6c90c879" Jan 28 15:06:57 crc kubenswrapper[4981]: E0128 15:06:57.360941 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f29f1c0bba7871a14ce5e53f6089b4bb73e2de6cf196d30bb6abd8fb6c90c879\": container with ID starting with f29f1c0bba7871a14ce5e53f6089b4bb73e2de6cf196d30bb6abd8fb6c90c879 not found: ID does not exist" containerID="f29f1c0bba7871a14ce5e53f6089b4bb73e2de6cf196d30bb6abd8fb6c90c879" Jan 28 15:06:57 crc kubenswrapper[4981]: I0128 15:06:57.361071 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f29f1c0bba7871a14ce5e53f6089b4bb73e2de6cf196d30bb6abd8fb6c90c879"} err="failed to get container status \"f29f1c0bba7871a14ce5e53f6089b4bb73e2de6cf196d30bb6abd8fb6c90c879\": rpc error: code = NotFound desc = could not find container \"f29f1c0bba7871a14ce5e53f6089b4bb73e2de6cf196d30bb6abd8fb6c90c879\": container with ID starting with f29f1c0bba7871a14ce5e53f6089b4bb73e2de6cf196d30bb6abd8fb6c90c879 not found: ID does not exist" Jan 28 15:06:57 crc kubenswrapper[4981]: I0128 15:06:57.361183 4981 scope.go:117] "RemoveContainer" containerID="ba125ab791b322a3d11c7b03e3454539fcfa9c501afa3e94b15dad0bdf1a7912" Jan 28 15:06:57 crc kubenswrapper[4981]: E0128 15:06:57.361651 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba125ab791b322a3d11c7b03e3454539fcfa9c501afa3e94b15dad0bdf1a7912\": container with ID starting with ba125ab791b322a3d11c7b03e3454539fcfa9c501afa3e94b15dad0bdf1a7912 not found: ID does not exist" containerID="ba125ab791b322a3d11c7b03e3454539fcfa9c501afa3e94b15dad0bdf1a7912" Jan 28 15:06:57 crc kubenswrapper[4981]: I0128 15:06:57.361676 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba125ab791b322a3d11c7b03e3454539fcfa9c501afa3e94b15dad0bdf1a7912"} err="failed to get container status \"ba125ab791b322a3d11c7b03e3454539fcfa9c501afa3e94b15dad0bdf1a7912\": rpc error: code = NotFound desc = could not find container \"ba125ab791b322a3d11c7b03e3454539fcfa9c501afa3e94b15dad0bdf1a7912\": container with ID starting with ba125ab791b322a3d11c7b03e3454539fcfa9c501afa3e94b15dad0bdf1a7912 not found: ID does not exist" Jan 28 15:06:57 crc kubenswrapper[4981]: I0128 15:06:57.361692 4981 scope.go:117] "RemoveContainer" containerID="79b62f4589aba8d2615592c6ddfc12dd44f4bc66b217fbb4d212050bebf69f5c" Jan 28 15:06:57 crc kubenswrapper[4981]: E0128 15:06:57.361989 4981 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"79b62f4589aba8d2615592c6ddfc12dd44f4bc66b217fbb4d212050bebf69f5c\": container with ID starting with 79b62f4589aba8d2615592c6ddfc12dd44f4bc66b217fbb4d212050bebf69f5c not found: ID does not exist" containerID="79b62f4589aba8d2615592c6ddfc12dd44f4bc66b217fbb4d212050bebf69f5c" Jan 28 15:06:57 crc kubenswrapper[4981]: I0128 15:06:57.362014 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b62f4589aba8d2615592c6ddfc12dd44f4bc66b217fbb4d212050bebf69f5c"} err="failed to get container status \"79b62f4589aba8d2615592c6ddfc12dd44f4bc66b217fbb4d212050bebf69f5c\": rpc error: code = NotFound desc = could not find container \"79b62f4589aba8d2615592c6ddfc12dd44f4bc66b217fbb4d212050bebf69f5c\": container with ID starting with 79b62f4589aba8d2615592c6ddfc12dd44f4bc66b217fbb4d212050bebf69f5c not found: ID does not exist" Jan 28 15:06:58 crc kubenswrapper[4981]: I0128 15:06:58.895229 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-t2mhx" Jan 28 15:06:59 crc kubenswrapper[4981]: I0128 15:06:59.331642 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="271ceb24-1e9d-44c5-a8d2-168c2b34d81a" path="/var/lib/kubelet/pods/271ceb24-1e9d-44c5-a8d2-168c2b34d81a/volumes" Jan 28 15:06:59 crc kubenswrapper[4981]: I0128 15:06:59.719913 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t2mhx"] Jan 28 15:06:59 crc kubenswrapper[4981]: I0128 15:06:59.720494 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-t2mhx" podUID="4103aaf7-a794-43ec-be3d-3c72aff08400" containerName="registry-server" containerID="cri-o://6b29b65ce58f19eacf187ca5e5a0d87265a351528c84bf2d9d4e92803e9310dc" gracePeriod=2 Jan 28 15:07:00 crc kubenswrapper[4981]: I0128 15:07:00.159341 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lm8cf"] Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.258619 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-t2mhx" Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.311222 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dk7j\" (UniqueName: \"kubernetes.io/projected/4103aaf7-a794-43ec-be3d-3c72aff08400-kube-api-access-7dk7j\") pod \"4103aaf7-a794-43ec-be3d-3c72aff08400\" (UID: \"4103aaf7-a794-43ec-be3d-3c72aff08400\") " Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.311400 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4103aaf7-a794-43ec-be3d-3c72aff08400-catalog-content\") pod \"4103aaf7-a794-43ec-be3d-3c72aff08400\" (UID: \"4103aaf7-a794-43ec-be3d-3c72aff08400\") " Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.311444 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4103aaf7-a794-43ec-be3d-3c72aff08400-utilities\") pod \"4103aaf7-a794-43ec-be3d-3c72aff08400\" (UID: \"4103aaf7-a794-43ec-be3d-3c72aff08400\") " Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.312879 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4103aaf7-a794-43ec-be3d-3c72aff08400-utilities" (OuterVolumeSpecName: "utilities") pod "4103aaf7-a794-43ec-be3d-3c72aff08400" (UID: "4103aaf7-a794-43ec-be3d-3c72aff08400"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.312911 4981 generic.go:334] "Generic (PLEG): container finished" podID="4103aaf7-a794-43ec-be3d-3c72aff08400" containerID="6b29b65ce58f19eacf187ca5e5a0d87265a351528c84bf2d9d4e92803e9310dc" exitCode=0 Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.313025 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-t2mhx" Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.313031 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2mhx" event={"ID":"4103aaf7-a794-43ec-be3d-3c72aff08400","Type":"ContainerDied","Data":"6b29b65ce58f19eacf187ca5e5a0d87265a351528c84bf2d9d4e92803e9310dc"} Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.313080 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-t2mhx" event={"ID":"4103aaf7-a794-43ec-be3d-3c72aff08400","Type":"ContainerDied","Data":"a2de7b41638d1c98858be70845c6f42912e172ff75ef70cb1f25e2637bb8eec6"} Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.313103 4981 scope.go:117] "RemoveContainer" containerID="6b29b65ce58f19eacf187ca5e5a0d87265a351528c84bf2d9d4e92803e9310dc" Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.317503 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4103aaf7-a794-43ec-be3d-3c72aff08400-kube-api-access-7dk7j" (OuterVolumeSpecName: "kube-api-access-7dk7j") pod "4103aaf7-a794-43ec-be3d-3c72aff08400" (UID: "4103aaf7-a794-43ec-be3d-3c72aff08400"). InnerVolumeSpecName "kube-api-access-7dk7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.350770 4981 scope.go:117] "RemoveContainer" containerID="15ea0151facbbee78fab8324d720a356c286d5c61958db4b514397dd32f08cbc" Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.386046 4981 scope.go:117] "RemoveContainer" containerID="88eee054fb0e2801cb41bdb834370fce5abd38f7838a7bd69d7e71b2e7ded9fb" Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.402379 4981 scope.go:117] "RemoveContainer" containerID="6b29b65ce58f19eacf187ca5e5a0d87265a351528c84bf2d9d4e92803e9310dc" Jan 28 15:07:01 crc kubenswrapper[4981]: E0128 15:07:01.402890 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b29b65ce58f19eacf187ca5e5a0d87265a351528c84bf2d9d4e92803e9310dc\": container with ID starting with 6b29b65ce58f19eacf187ca5e5a0d87265a351528c84bf2d9d4e92803e9310dc not found: ID does not exist" containerID="6b29b65ce58f19eacf187ca5e5a0d87265a351528c84bf2d9d4e92803e9310dc" Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.402957 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b29b65ce58f19eacf187ca5e5a0d87265a351528c84bf2d9d4e92803e9310dc"} err="failed to get container status \"6b29b65ce58f19eacf187ca5e5a0d87265a351528c84bf2d9d4e92803e9310dc\": rpc error: code = NotFound desc = could not find container \"6b29b65ce58f19eacf187ca5e5a0d87265a351528c84bf2d9d4e92803e9310dc\": container with ID starting with 6b29b65ce58f19eacf187ca5e5a0d87265a351528c84bf2d9d4e92803e9310dc not found: ID does not exist" Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.403000 4981 scope.go:117] "RemoveContainer" containerID="15ea0151facbbee78fab8324d720a356c286d5c61958db4b514397dd32f08cbc" Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.405882 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4103aaf7-a794-43ec-be3d-3c72aff08400-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4103aaf7-a794-43ec-be3d-3c72aff08400" (UID: "4103aaf7-a794-43ec-be3d-3c72aff08400"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:07:01 crc kubenswrapper[4981]: E0128 15:07:01.406530 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15ea0151facbbee78fab8324d720a356c286d5c61958db4b514397dd32f08cbc\": container with ID starting with 15ea0151facbbee78fab8324d720a356c286d5c61958db4b514397dd32f08cbc not found: ID does not exist" containerID="15ea0151facbbee78fab8324d720a356c286d5c61958db4b514397dd32f08cbc" Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.406572 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15ea0151facbbee78fab8324d720a356c286d5c61958db4b514397dd32f08cbc"} err="failed to get container status \"15ea0151facbbee78fab8324d720a356c286d5c61958db4b514397dd32f08cbc\": rpc error: code = NotFound desc = could not find container \"15ea0151facbbee78fab8324d720a356c286d5c61958db4b514397dd32f08cbc\": container with ID starting with 15ea0151facbbee78fab8324d720a356c286d5c61958db4b514397dd32f08cbc not found: ID does not exist" Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.406605 4981 scope.go:117] "RemoveContainer" containerID="88eee054fb0e2801cb41bdb834370fce5abd38f7838a7bd69d7e71b2e7ded9fb" Jan 28 15:07:01 crc kubenswrapper[4981]: E0128 15:07:01.407113 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88eee054fb0e2801cb41bdb834370fce5abd38f7838a7bd69d7e71b2e7ded9fb\": container with ID starting with 88eee054fb0e2801cb41bdb834370fce5abd38f7838a7bd69d7e71b2e7ded9fb not found: ID does not exist" containerID="88eee054fb0e2801cb41bdb834370fce5abd38f7838a7bd69d7e71b2e7ded9fb" Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.407153 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88eee054fb0e2801cb41bdb834370fce5abd38f7838a7bd69d7e71b2e7ded9fb"} err="failed to get container status \"88eee054fb0e2801cb41bdb834370fce5abd38f7838a7bd69d7e71b2e7ded9fb\": rpc error: code = NotFound desc = could not find container \"88eee054fb0e2801cb41bdb834370fce5abd38f7838a7bd69d7e71b2e7ded9fb\": container with ID starting with 88eee054fb0e2801cb41bdb834370fce5abd38f7838a7bd69d7e71b2e7ded9fb not found: ID does not exist" Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.412458 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dk7j\" (UniqueName: \"kubernetes.io/projected/4103aaf7-a794-43ec-be3d-3c72aff08400-kube-api-access-7dk7j\") on node \"crc\" DevicePath \"\"" Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.412483 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4103aaf7-a794-43ec-be3d-3c72aff08400-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.412494 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4103aaf7-a794-43ec-be3d-3c72aff08400-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.479240 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dqqn8" Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.519569 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dqqn8" Jan 28 15:07:01 
crc kubenswrapper[4981]: I0128 15:07:01.659232 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-t2mhx"] Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.662332 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k2h52" Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.662486 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-t2mhx"] Jan 28 15:07:01 crc kubenswrapper[4981]: I0128 15:07:01.708333 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k2h52" Jan 28 15:07:03 crc kubenswrapper[4981]: I0128 15:07:03.327377 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4103aaf7-a794-43ec-be3d-3c72aff08400" path="/var/lib/kubelet/pods/4103aaf7-a794-43ec-be3d-3c72aff08400/volumes" Jan 28 15:07:03 crc kubenswrapper[4981]: I0128 15:07:03.709652 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2h52"] Jan 28 15:07:03 crc kubenswrapper[4981]: I0128 15:07:03.710047 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k2h52" podUID="0d2bf303-3b5f-4c86-bf7e-eeef12979424" containerName="registry-server" containerID="cri-o://b614ef24d274093535b0043e4a08ecd3ccc420e18538432f01c7fd72b657c9d0" gracePeriod=2 Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.166662 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2h52" Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.254986 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d2bf303-3b5f-4c86-bf7e-eeef12979424-utilities\") pod \"0d2bf303-3b5f-4c86-bf7e-eeef12979424\" (UID: \"0d2bf303-3b5f-4c86-bf7e-eeef12979424\") " Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.255243 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d2bf303-3b5f-4c86-bf7e-eeef12979424-catalog-content\") pod \"0d2bf303-3b5f-4c86-bf7e-eeef12979424\" (UID: \"0d2bf303-3b5f-4c86-bf7e-eeef12979424\") " Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.255369 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhbvz\" (UniqueName: \"kubernetes.io/projected/0d2bf303-3b5f-4c86-bf7e-eeef12979424-kube-api-access-hhbvz\") pod \"0d2bf303-3b5f-4c86-bf7e-eeef12979424\" (UID: \"0d2bf303-3b5f-4c86-bf7e-eeef12979424\") " Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.256323 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d2bf303-3b5f-4c86-bf7e-eeef12979424-utilities" (OuterVolumeSpecName: "utilities") pod "0d2bf303-3b5f-4c86-bf7e-eeef12979424" (UID: "0d2bf303-3b5f-4c86-bf7e-eeef12979424"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.260622 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d2bf303-3b5f-4c86-bf7e-eeef12979424-kube-api-access-hhbvz" (OuterVolumeSpecName: "kube-api-access-hhbvz") pod "0d2bf303-3b5f-4c86-bf7e-eeef12979424" (UID: "0d2bf303-3b5f-4c86-bf7e-eeef12979424"). 
InnerVolumeSpecName "kube-api-access-hhbvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.335060 4981 generic.go:334] "Generic (PLEG): container finished" podID="0d2bf303-3b5f-4c86-bf7e-eeef12979424" containerID="b614ef24d274093535b0043e4a08ecd3ccc420e18538432f01c7fd72b657c9d0" exitCode=0 Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.335110 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2h52" event={"ID":"0d2bf303-3b5f-4c86-bf7e-eeef12979424","Type":"ContainerDied","Data":"b614ef24d274093535b0043e4a08ecd3ccc420e18538432f01c7fd72b657c9d0"} Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.335137 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k2h52" event={"ID":"0d2bf303-3b5f-4c86-bf7e-eeef12979424","Type":"ContainerDied","Data":"6185a2e0cca93b7b557080f3fe1aeb0acc71738f4c884c348890ad359c5b019b"} Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.335160 4981 scope.go:117] "RemoveContainer" containerID="b614ef24d274093535b0043e4a08ecd3ccc420e18538432f01c7fd72b657c9d0" Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.335226 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k2h52" Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.355709 4981 scope.go:117] "RemoveContainer" containerID="d26ad41d99bc4a72ee23542798cb19df8ff020d0aa8cd5a1cf8943daa87f5dab" Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.356657 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhbvz\" (UniqueName: \"kubernetes.io/projected/0d2bf303-3b5f-4c86-bf7e-eeef12979424-kube-api-access-hhbvz\") on node \"crc\" DevicePath \"\"" Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.356687 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d2bf303-3b5f-4c86-bf7e-eeef12979424-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.374842 4981 scope.go:117] "RemoveContainer" containerID="970eb8e61131d4733c0fd13fc92f2ea1bb5da88db72a6e9756d872c5764bbb42" Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.393315 4981 scope.go:117] "RemoveContainer" containerID="b614ef24d274093535b0043e4a08ecd3ccc420e18538432f01c7fd72b657c9d0" Jan 28 15:07:04 crc kubenswrapper[4981]: E0128 15:07:04.393648 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b614ef24d274093535b0043e4a08ecd3ccc420e18538432f01c7fd72b657c9d0\": container with ID starting with b614ef24d274093535b0043e4a08ecd3ccc420e18538432f01c7fd72b657c9d0 not found: ID does not exist" containerID="b614ef24d274093535b0043e4a08ecd3ccc420e18538432f01c7fd72b657c9d0" Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.393690 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b614ef24d274093535b0043e4a08ecd3ccc420e18538432f01c7fd72b657c9d0"} err="failed to get container status \"b614ef24d274093535b0043e4a08ecd3ccc420e18538432f01c7fd72b657c9d0\": rpc error: code = NotFound desc = could not find container \"b614ef24d274093535b0043e4a08ecd3ccc420e18538432f01c7fd72b657c9d0\": container with ID starting with b614ef24d274093535b0043e4a08ecd3ccc420e18538432f01c7fd72b657c9d0 not found: ID does not exist" Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 
15:07:04.393716 4981 scope.go:117] "RemoveContainer" containerID="d26ad41d99bc4a72ee23542798cb19df8ff020d0aa8cd5a1cf8943daa87f5dab" Jan 28 15:07:04 crc kubenswrapper[4981]: E0128 15:07:04.394063 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d26ad41d99bc4a72ee23542798cb19df8ff020d0aa8cd5a1cf8943daa87f5dab\": container with ID starting with d26ad41d99bc4a72ee23542798cb19df8ff020d0aa8cd5a1cf8943daa87f5dab not found: ID does not exist" containerID="d26ad41d99bc4a72ee23542798cb19df8ff020d0aa8cd5a1cf8943daa87f5dab" Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.394117 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d26ad41d99bc4a72ee23542798cb19df8ff020d0aa8cd5a1cf8943daa87f5dab"} err="failed to get container status \"d26ad41d99bc4a72ee23542798cb19df8ff020d0aa8cd5a1cf8943daa87f5dab\": rpc error: code = NotFound desc = could not find container \"d26ad41d99bc4a72ee23542798cb19df8ff020d0aa8cd5a1cf8943daa87f5dab\": container with ID starting with d26ad41d99bc4a72ee23542798cb19df8ff020d0aa8cd5a1cf8943daa87f5dab not found: ID does not exist" Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.394152 4981 scope.go:117] "RemoveContainer" containerID="970eb8e61131d4733c0fd13fc92f2ea1bb5da88db72a6e9756d872c5764bbb42" Jan 28 15:07:04 crc kubenswrapper[4981]: E0128 15:07:04.394581 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"970eb8e61131d4733c0fd13fc92f2ea1bb5da88db72a6e9756d872c5764bbb42\": container with ID starting with 970eb8e61131d4733c0fd13fc92f2ea1bb5da88db72a6e9756d872c5764bbb42 not found: ID does not exist" containerID="970eb8e61131d4733c0fd13fc92f2ea1bb5da88db72a6e9756d872c5764bbb42" Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.394652 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"970eb8e61131d4733c0fd13fc92f2ea1bb5da88db72a6e9756d872c5764bbb42"} err="failed to get container status \"970eb8e61131d4733c0fd13fc92f2ea1bb5da88db72a6e9756d872c5764bbb42\": rpc error: code = NotFound desc = could not find container \"970eb8e61131d4733c0fd13fc92f2ea1bb5da88db72a6e9756d872c5764bbb42\": container with ID starting with 970eb8e61131d4733c0fd13fc92f2ea1bb5da88db72a6e9756d872c5764bbb42 not found: ID does not exist" Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.400019 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d2bf303-3b5f-4c86-bf7e-eeef12979424-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d2bf303-3b5f-4c86-bf7e-eeef12979424" (UID: "0d2bf303-3b5f-4c86-bf7e-eeef12979424"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.458601 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d2bf303-3b5f-4c86-bf7e-eeef12979424-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.687826 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k2h52"] Jan 28 15:07:04 crc kubenswrapper[4981]: I0128 15:07:04.691885 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k2h52"] Jan 28 15:07:05 crc kubenswrapper[4981]: I0128 15:07:05.347611 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d2bf303-3b5f-4c86-bf7e-eeef12979424" path="/var/lib/kubelet/pods/0d2bf303-3b5f-4c86-bf7e-eeef12979424/volumes" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.611384 4981 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.613499 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946" gracePeriod=15 Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.613744 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b" gracePeriod=15 Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.613796 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4" gracePeriod=15 Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.613807 4981 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.613851 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a" gracePeriod=15 Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.613895 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f" gracePeriod=15 Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.614303 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="271ceb24-1e9d-44c5-a8d2-168c2b34d81a" containerName="extract-utilities" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.614339 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="271ceb24-1e9d-44c5-a8d2-168c2b34d81a" containerName="extract-utilities" Jan 28 
15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.614361 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.614379 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.614406 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4be2641-d6f3-4f86-ac61-e53d94db16c4" containerName="extract-utilities" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.614427 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4be2641-d6f3-4f86-ac61-e53d94db16c4" containerName="extract-utilities" Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.614457 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4be2641-d6f3-4f86-ac61-e53d94db16c4" containerName="registry-server" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.614473 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4be2641-d6f3-4f86-ac61-e53d94db16c4" containerName="registry-server" Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.614496 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.614512 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.614533 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4103aaf7-a794-43ec-be3d-3c72aff08400" containerName="extract-utilities" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.614549 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4103aaf7-a794-43ec-be3d-3c72aff08400" containerName="extract-utilities" Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.614569 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4be2641-d6f3-4f86-ac61-e53d94db16c4" containerName="extract-content" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.614587 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4be2641-d6f3-4f86-ac61-e53d94db16c4" containerName="extract-content" Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.614607 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.614646 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.614668 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.614685 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.614709 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.614727 4981 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.614749 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.614766 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.614786 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.614802 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.614824 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4103aaf7-a794-43ec-be3d-3c72aff08400" containerName="registry-server" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.614840 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4103aaf7-a794-43ec-be3d-3c72aff08400" containerName="registry-server" Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.614867 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="271ceb24-1e9d-44c5-a8d2-168c2b34d81a" containerName="registry-server" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.614884 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="271ceb24-1e9d-44c5-a8d2-168c2b34d81a" containerName="registry-server" Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.614916 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="271ceb24-1e9d-44c5-a8d2-168c2b34d81a" containerName="extract-content" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.614932 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="271ceb24-1e9d-44c5-a8d2-168c2b34d81a" containerName="extract-content" Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.614962 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.614978 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.615000 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d2bf303-3b5f-4c86-bf7e-eeef12979424" containerName="registry-server" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.615018 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d2bf303-3b5f-4c86-bf7e-eeef12979424" containerName="registry-server" Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.615044 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4103aaf7-a794-43ec-be3d-3c72aff08400" containerName="extract-content" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.615060 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4103aaf7-a794-43ec-be3d-3c72aff08400" containerName="extract-content" Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.615079 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d2bf303-3b5f-4c86-bf7e-eeef12979424" containerName="extract-content" Jan 28 15:07:13 crc 
kubenswrapper[4981]: I0128 15:07:13.615095 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d2bf303-3b5f-4c86-bf7e-eeef12979424" containerName="extract-content" Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.615118 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d2bf303-3b5f-4c86-bf7e-eeef12979424" containerName="extract-utilities" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.615137 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d2bf303-3b5f-4c86-bf7e-eeef12979424" containerName="extract-utilities" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.615427 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="271ceb24-1e9d-44c5-a8d2-168c2b34d81a" containerName="registry-server" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.615454 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="4103aaf7-a794-43ec-be3d-3c72aff08400" containerName="registry-server" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.615478 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.615501 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.615519 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.615540 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4be2641-d6f3-4f86-ac61-e53d94db16c4" containerName="registry-server" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.615564 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.615589 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.615611 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.615634 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d2bf303-3b5f-4c86-bf7e-eeef12979424" containerName="registry-server" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.616120 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.662025 4981 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.663398 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
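This block is a static-pod rollover, not an eviction: the kube-apiserver-crc manifest changed on disk, so the file source emits SyncLoop REMOVE and ADD, the five old containers are killed with gracePeriod=15, and cpu_manager/memory_manager drop their stale per-container accounting (the repeated "RemoveStaleState" lines) before the replacement pod and its startup monitor are admitted. The grace-period kill follows the usual TERM-then-KILL shape; a self-contained sketch of that shape under the 2s/15s periods seen in this log (illustrative only; the kubelet delegates the real work to the CRI runtime):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
	"time"
)

// killWithGrace asks the process to exit with SIGTERM, then escalates to
// SIGKILL once the grace period elapses, mirroring the
// "Killing container with a grace period ... gracePeriod=N" entries above.
func killWithGrace(p *os.Process, grace time.Duration, exited <-chan error) error {
	_ = p.Signal(syscall.SIGTERM)
	select {
	case err := <-exited:
		return err // exited within the grace period
	case <-time.After(grace):
		return p.Kill() // grace period spent; hard kill
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		fmt.Println(err)
		return
	}
	exited := make(chan error, 1)
	go func() { exited <- cmd.Wait() }()
	fmt.Println(killWithGrace(cmd.Process, 2*time.Second, exited))
}
```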
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.666067 4981 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" Jan 28 15:07:13 crc kubenswrapper[4981]: E0128 15:07:13.692616 4981 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.151:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.697836 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.697908 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.697949 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.698015 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.698061 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.698091 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.698123 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.698154 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.799378 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.799479 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.799512 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.799547 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.799578 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.799581 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.799675 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.799682 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.799736 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.799743 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.799612 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.799814 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.799893 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.799964 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.800163 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.800253 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 15:07:13 crc kubenswrapper[4981]: I0128 15:07:13.994775 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:07:14 crc kubenswrapper[4981]: E0128 15:07:14.034828 4981 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.151:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188eed7f2a91a8ee openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:07:14.034002158 +0000 UTC m=+245.486160439,LastTimestamp:2026-01-28 15:07:14.034002158 +0000 UTC m=+245.486160439,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 28 15:07:14 crc kubenswrapper[4981]: I0128 15:07:14.401964 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 28 15:07:14 crc kubenswrapper[4981]: I0128 15:07:14.404875 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 28 15:07:14 crc kubenswrapper[4981]: I0128 15:07:14.406303 4981 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b" exitCode=0
Jan 28 15:07:14 crc kubenswrapper[4981]: I0128 15:07:14.406362 4981 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4" exitCode=0
Jan 28 15:07:14 crc kubenswrapper[4981]: I0128 15:07:14.406383 4981 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a" exitCode=0
Jan 28 15:07:14 crc kubenswrapper[4981]: I0128 15:07:14.406400 4981 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f" exitCode=2
Jan 28 15:07:14 crc kubenswrapper[4981]: I0128 15:07:14.406413 4981 scope.go:117] "RemoveContainer" containerID="f9bfea94a7faf14a96a0067cc72839275290c34dcdacdf646734a30666b06915"
Jan 28 15:07:14 crc kubenswrapper[4981]: I0128 15:07:14.409654 4981 generic.go:334] "Generic (PLEG): container finished" podID="e152bac2-8343-44cd-8df7-659fc89ad725" containerID="c650b9be57db7672536359204d83fe306bed014fce01cc533d8ec42f92dfd9f6" exitCode=0
Jan 28 15:07:14 crc kubenswrapper[4981]: I0128 15:07:14.409747 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e152bac2-8343-44cd-8df7-659fc89ad725","Type":"ContainerDied","Data":"c650b9be57db7672536359204d83fe306bed014fce01cc533d8ec42f92dfd9f6"}
Jan 28 15:07:14 crc kubenswrapper[4981]: I0128 15:07:14.410792 4981 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:14 crc kubenswrapper[4981]: I0128 15:07:14.411294 4981 status_manager.go:851] "Failed to get status for pod" podUID="e152bac2-8343-44cd-8df7-659fc89ad725" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:14 crc kubenswrapper[4981]: I0128 15:07:14.412765 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"e3d53e30a8fb7aa5c3d8c1dc47b1f770c3b4d4536e7fd00bf9223394426397c3"}
Jan 28 15:07:14 crc kubenswrapper[4981]: I0128 15:07:14.412825 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"da1629fbec9cf0a26fcb2416fcacaa4afaf17e8ec0221ebef46366d91315741a"}
Jan 28 15:07:14 crc kubenswrapper[4981]: I0128 15:07:14.413850 4981 status_manager.go:851] "Failed to get status for pod" podUID="e152bac2-8343-44cd-8df7-659fc89ad725" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:14 crc kubenswrapper[4981]: E0128 15:07:14.414003 4981 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.151:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 15:07:14 crc kubenswrapper[4981]: I0128 15:07:14.414550 4981 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:14 crc kubenswrapper[4981]: I0128 15:07:14.503078 4981 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 28 15:07:14 crc kubenswrapper[4981]: I0128 15:07:14.503310 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 28 15:07:14 crc kubenswrapper[4981]: E0128 15:07:14.759577 4981 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.151:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188eed7f2a91a8ee openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:07:14.034002158 +0000 UTC m=+245.486160439,LastTimestamp:2026-01-28 15:07:14.034002158 +0000 UTC m=+245.486160439,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 28 15:07:15 crc kubenswrapper[4981]: I0128 15:07:15.424178 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 28 15:07:15 crc kubenswrapper[4981]: I0128 15:07:15.809477 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 15:07:15 crc kubenswrapper[4981]: I0128 15:07:15.811373 4981 status_manager.go:851] "Failed to get status for pod" podUID="e152bac2-8343-44cd-8df7-659fc89ad725" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:15 crc kubenswrapper[4981]: I0128 15:07:15.834993 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e152bac2-8343-44cd-8df7-659fc89ad725-kubelet-dir\") pod \"e152bac2-8343-44cd-8df7-659fc89ad725\" (UID: \"e152bac2-8343-44cd-8df7-659fc89ad725\") "
Jan 28 15:07:15 crc kubenswrapper[4981]: I0128 15:07:15.835122 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e152bac2-8343-44cd-8df7-659fc89ad725-var-lock\") pod \"e152bac2-8343-44cd-8df7-659fc89ad725\" (UID: \"e152bac2-8343-44cd-8df7-659fc89ad725\") "
Jan 28 15:07:15 crc kubenswrapper[4981]: I0128 15:07:15.835172 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e152bac2-8343-44cd-8df7-659fc89ad725-kube-api-access\") pod \"e152bac2-8343-44cd-8df7-659fc89ad725\" (UID: \"e152bac2-8343-44cd-8df7-659fc89ad725\") "
Jan 28 15:07:15 crc kubenswrapper[4981]: I0128 15:07:15.835422 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e152bac2-8343-44cd-8df7-659fc89ad725-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e152bac2-8343-44cd-8df7-659fc89ad725" (UID: "e152bac2-8343-44cd-8df7-659fc89ad725"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:07:15 crc kubenswrapper[4981]: I0128 15:07:15.835497 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e152bac2-8343-44cd-8df7-659fc89ad725-var-lock" (OuterVolumeSpecName: "var-lock") pod "e152bac2-8343-44cd-8df7-659fc89ad725" (UID: "e152bac2-8343-44cd-8df7-659fc89ad725"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:07:15 crc kubenswrapper[4981]: I0128 15:07:15.835703 4981 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e152bac2-8343-44cd-8df7-659fc89ad725-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:15 crc kubenswrapper[4981]: I0128 15:07:15.835724 4981 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e152bac2-8343-44cd-8df7-659fc89ad725-var-lock\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:15 crc kubenswrapper[4981]: I0128 15:07:15.843943 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e152bac2-8343-44cd-8df7-659fc89ad725-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e152bac2-8343-44cd-8df7-659fc89ad725" (UID: "e152bac2-8343-44cd-8df7-659fc89ad725"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:07:15 crc kubenswrapper[4981]: I0128 15:07:15.936755 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e152bac2-8343-44cd-8df7-659fc89ad725-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.006441 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.007649 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.008986 4981 status_manager.go:851] "Failed to get status for pod" podUID="e152bac2-8343-44cd-8df7-659fc89ad725" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.009604 4981 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.037291 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.037356 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.037385 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.037520 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.037568 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.037520 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.139029 4981 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.139086 4981 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.139106 4981 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.432639 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e152bac2-8343-44cd-8df7-659fc89ad725","Type":"ContainerDied","Data":"81c8b981c92a2a832f967f5545a128c99cc7c2754e123af3879e176e732181e0"}
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.432714 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81c8b981c92a2a832f967f5545a128c99cc7c2754e123af3879e176e732181e0"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.432669 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.435937 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.437110 4981 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946" exitCode=0
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.437173 4981 scope.go:117] "RemoveContainer" containerID="fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.437317 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.456475 4981 scope.go:117] "RemoveContainer" containerID="aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.462623 4981 status_manager.go:851] "Failed to get status for pod" podUID="e152bac2-8343-44cd-8df7-659fc89ad725" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.462889 4981 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.463245 4981 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.463511 4981 status_manager.go:851] "Failed to get status for pod" podUID="e152bac2-8343-44cd-8df7-659fc89ad725" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.482333 4981 scope.go:117] "RemoveContainer" containerID="7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.504794 4981 scope.go:117] "RemoveContainer" containerID="064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.517386 4981 scope.go:117] "RemoveContainer" containerID="58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.533717 4981 scope.go:117] "RemoveContainer" containerID="90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.553996 4981 scope.go:117] "RemoveContainer" containerID="fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b"
Jan 28 15:07:16 crc kubenswrapper[4981]: E0128 15:07:16.554528 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\": container with ID starting with fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b not found: ID does not exist" containerID="fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.554576 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b"} err="failed to get container status \"fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\": rpc error: code = NotFound desc = could not find container \"fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b\": container with ID starting with fba843e5bc3b2c9188a09eae7b54ccf3f63e69e9c0b3caf5d0efe44c9d09990b not found: ID does not exist"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.554614 4981 scope.go:117] "RemoveContainer" containerID="aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4"
Jan 28 15:07:16 crc kubenswrapper[4981]: E0128 15:07:16.555017 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\": container with ID starting with aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4 not found: ID does not exist" containerID="aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.555047 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4"} err="failed to get container status \"aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\": rpc error: code = NotFound desc = could not find container \"aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4\": container with ID starting with aa0fa427101bad8f38b020403f4ec2d0bbd4b5e3646ca49c7c548569e6ae30e4 not found: ID does not exist"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.555070 4981 scope.go:117] "RemoveContainer" containerID="7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a"
Jan 28 15:07:16 crc kubenswrapper[4981]: E0128 15:07:16.555475 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\": container with ID starting with 7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a not found: ID does not exist" containerID="7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.555509 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a"} err="failed to get container status \"7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\": rpc error: code = NotFound desc = could not find container \"7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a\": container with ID starting with 7fa2fff70bf2171d68944792ed9e9b5ca8ce92fd997f916adecc76e237ad3d3a not found: ID does not exist"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.555531 4981 scope.go:117] "RemoveContainer" containerID="064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f"
Jan 28 15:07:16 crc kubenswrapper[4981]: E0128 15:07:16.555777 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\": container with ID starting with 064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f not found: ID does not exist" containerID="064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.555811 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f"} err="failed to get container status \"064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\": rpc error: code = NotFound desc = could not find container \"064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f\": container with ID starting with 064e8b36a3426c64352885aa1c4fa2e53a4c2528915600ac570cc80d52b5db1f not found: ID does not exist"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.555833 4981 scope.go:117] "RemoveContainer" containerID="58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946"
Jan 28 15:07:16 crc kubenswrapper[4981]: E0128 15:07:16.556160 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\": container with ID starting with 58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946 not found: ID does not exist" containerID="58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.556206 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946"} err="failed to get container status \"58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\": rpc error: code = NotFound desc = could not find container \"58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946\": container with ID starting with 58c12c4bfbb45bc4da5bca8e28ed86d1d715f7f192f485b9641b968a731c1946 not found: ID does not exist"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.556228 4981 scope.go:117] "RemoveContainer" containerID="90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3"
Jan 28 15:07:16 crc kubenswrapper[4981]: E0128 15:07:16.556464 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\": container with ID starting with 90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3 not found: ID does not exist" containerID="90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3"
Jan 28 15:07:16 crc kubenswrapper[4981]: I0128 15:07:16.556497 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3"} err="failed to get container status \"90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\": rpc error: code = NotFound desc = could not find container \"90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3\": container with ID starting with 90e7ed995cf73c21e12357d9caf7840fbd798b0313d210a9e07ba38af5e78dc3 not found: ID does not exist"
Jan 28 15:07:17 crc kubenswrapper[4981]: I0128 15:07:17.325079 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 28 15:07:18 crc kubenswrapper[4981]: E0128 15:07:18.963785 4981 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:18 crc kubenswrapper[4981]: E0128 15:07:18.964840 4981 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:18 crc kubenswrapper[4981]: E0128 15:07:18.965435 4981 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:18 crc kubenswrapper[4981]: E0128 15:07:18.966853 4981 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:18 crc kubenswrapper[4981]: E0128 15:07:18.968071 4981 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:18 crc kubenswrapper[4981]: I0128 15:07:18.968131 4981 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 28 15:07:18 crc kubenswrapper[4981]: E0128 15:07:18.968751 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="200ms"
Jan 28 15:07:19 crc kubenswrapper[4981]: E0128 15:07:19.170277 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="400ms"
Jan 28 15:07:19 crc kubenswrapper[4981]: I0128 15:07:19.322720 4981 status_manager.go:851] "Failed to get status for pod" podUID="e152bac2-8343-44cd-8df7-659fc89ad725" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:19 crc kubenswrapper[4981]: E0128 15:07:19.571848 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="800ms"
Jan 28 15:07:20 crc kubenswrapper[4981]: E0128 15:07:20.372462 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="1.6s"
Jan 28 15:07:21 crc kubenswrapper[4981]: E0128 15:07:21.974334 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="3.2s"
Jan 28 15:07:24 crc kubenswrapper[4981]: E0128 15:07:24.761224 4981 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.151:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188eed7f2a91a8ee openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 15:07:14.034002158 +0000 UTC m=+245.486160439,LastTimestamp:2026-01-28 15:07:14.034002158 +0000 UTC m=+245.486160439,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 28 15:07:25 crc kubenswrapper[4981]: E0128 15:07:25.175834 4981 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.151:6443: connect: connection refused" interval="6.4s"
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.182198 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" podUID="200de941-a8aa-4930-a959-553869b8a2d0" containerName="oauth-openshift" containerID="cri-o://61a0c2a5bd23f2fda71f61874858bde5400ecb45987d2943dc3767dacb900614" gracePeriod=15
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.535522 4981 generic.go:334] "Generic (PLEG): container finished" podID="200de941-a8aa-4930-a959-553869b8a2d0" containerID="61a0c2a5bd23f2fda71f61874858bde5400ecb45987d2943dc3767dacb900614" exitCode=0
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.535667 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" event={"ID":"200de941-a8aa-4930-a959-553869b8a2d0","Type":"ContainerDied","Data":"61a0c2a5bd23f2fda71f61874858bde5400ecb45987d2943dc3767dacb900614"}
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.620481 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf"
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.620884 4981 status_manager.go:851] "Failed to get status for pod" podUID="e152bac2-8343-44cd-8df7-659fc89ad725" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.621336 4981 status_manager.go:851] "Failed to get status for pod" podUID="200de941-a8aa-4930-a959-553869b8a2d0" pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lm8cf\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.693691 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-trusted-ca-bundle\") pod \"200de941-a8aa-4930-a959-553869b8a2d0\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") "
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.693735 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-service-ca\") pod \"200de941-a8aa-4930-a959-553869b8a2d0\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") "
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.693775 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/200de941-a8aa-4930-a959-553869b8a2d0-audit-dir\") pod \"200de941-a8aa-4930-a959-553869b8a2d0\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") "
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.693804 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-session\") pod \"200de941-a8aa-4930-a959-553869b8a2d0\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") "
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.693824 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-cliconfig\") pod \"200de941-a8aa-4930-a959-553869b8a2d0\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") "
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.693845 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-ocp-branding-template\") pod \"200de941-a8aa-4930-a959-553869b8a2d0\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") "
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.693864 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-template-login\") pod \"200de941-a8aa-4930-a959-553869b8a2d0\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") "
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.693878 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-audit-policies\") pod \"200de941-a8aa-4930-a959-553869b8a2d0\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") "
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.693900 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-template-provider-selection\") pod \"200de941-a8aa-4930-a959-553869b8a2d0\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") "
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.693928 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p6lk\" (UniqueName: \"kubernetes.io/projected/200de941-a8aa-4930-a959-553869b8a2d0-kube-api-access-8p6lk\") pod \"200de941-a8aa-4930-a959-553869b8a2d0\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") "
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.693942 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-router-certs\") pod \"200de941-a8aa-4930-a959-553869b8a2d0\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") "
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.693958 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-serving-cert\") pod \"200de941-a8aa-4930-a959-553869b8a2d0\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") "
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.693974 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-idp-0-file-data\") pod \"200de941-a8aa-4930-a959-553869b8a2d0\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") "
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.693991 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-template-error\") pod \"200de941-a8aa-4930-a959-553869b8a2d0\" (UID: \"200de941-a8aa-4930-a959-553869b8a2d0\") "
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.694020 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/200de941-a8aa-4930-a959-553869b8a2d0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "200de941-a8aa-4930-a959-553869b8a2d0" (UID: "200de941-a8aa-4930-a959-553869b8a2d0"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.694395 4981 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/200de941-a8aa-4930-a959-553869b8a2d0-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.695378 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "200de941-a8aa-4930-a959-553869b8a2d0" (UID: "200de941-a8aa-4930-a959-553869b8a2d0"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.695732 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "200de941-a8aa-4930-a959-553869b8a2d0" (UID: "200de941-a8aa-4930-a959-553869b8a2d0"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.696022 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "200de941-a8aa-4930-a959-553869b8a2d0" (UID: "200de941-a8aa-4930-a959-553869b8a2d0"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.696041 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "200de941-a8aa-4930-a959-553869b8a2d0" (UID: "200de941-a8aa-4930-a959-553869b8a2d0"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.701395 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "200de941-a8aa-4930-a959-553869b8a2d0" (UID: "200de941-a8aa-4930-a959-553869b8a2d0"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.701814 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/200de941-a8aa-4930-a959-553869b8a2d0-kube-api-access-8p6lk" (OuterVolumeSpecName: "kube-api-access-8p6lk") pod "200de941-a8aa-4930-a959-553869b8a2d0" (UID: "200de941-a8aa-4930-a959-553869b8a2d0"). InnerVolumeSpecName "kube-api-access-8p6lk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.702583 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "200de941-a8aa-4930-a959-553869b8a2d0" (UID: "200de941-a8aa-4930-a959-553869b8a2d0"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.702950 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "200de941-a8aa-4930-a959-553869b8a2d0" (UID: "200de941-a8aa-4930-a959-553869b8a2d0"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.702941 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "200de941-a8aa-4930-a959-553869b8a2d0" (UID: "200de941-a8aa-4930-a959-553869b8a2d0"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.703117 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "200de941-a8aa-4930-a959-553869b8a2d0" (UID: "200de941-a8aa-4930-a959-553869b8a2d0"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.703690 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "200de941-a8aa-4930-a959-553869b8a2d0" (UID: "200de941-a8aa-4930-a959-553869b8a2d0"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.703692 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "200de941-a8aa-4930-a959-553869b8a2d0" (UID: "200de941-a8aa-4930-a959-553869b8a2d0"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.704274 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "200de941-a8aa-4930-a959-553869b8a2d0" (UID: "200de941-a8aa-4930-a959-553869b8a2d0"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.795727 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.795799 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.795823 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.795845 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.795868 4981 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-audit-policies\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.795888 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.795910 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p6lk\" (UniqueName: \"kubernetes.io/projected/200de941-a8aa-4930-a959-553869b8a2d0-kube-api-access-8p6lk\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.795930 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.795947 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.795966 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.795985 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.796004 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:25 crc kubenswrapper[4981]: I0128 15:07:25.796022 4981 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/200de941-a8aa-4930-a959-553869b8a2d0-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Jan 28 15:07:26 crc kubenswrapper[4981]: I0128 15:07:26.318654 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:07:26 crc kubenswrapper[4981]: I0128 15:07:26.319534 4981 status_manager.go:851] "Failed to get status for pod" podUID="200de941-a8aa-4930-a959-553869b8a2d0" pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lm8cf\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:26 crc kubenswrapper[4981]: I0128 15:07:26.319939 4981 status_manager.go:851] "Failed to get status for pod" podUID="e152bac2-8343-44cd-8df7-659fc89ad725" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:26 crc kubenswrapper[4981]: I0128 15:07:26.341452 4981 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1f1b26ee-5569-4a25-851d-f1e23f13870a"
Jan 28 15:07:26 crc kubenswrapper[4981]: I0128 15:07:26.341502 4981 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1f1b26ee-5569-4a25-851d-f1e23f13870a"
Jan 28 15:07:26 crc kubenswrapper[4981]: E0128 15:07:26.342153 4981 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:07:26 crc kubenswrapper[4981]: I0128 15:07:26.343072 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:07:26 crc kubenswrapper[4981]: W0128 15:07:26.388543 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-327b8727e6410c840433f04cbd53d034521e525fd293018fa7ab5b3697234213 WatchSource:0}: Error finding container 327b8727e6410c840433f04cbd53d034521e525fd293018fa7ab5b3697234213: Status 404 returned error can't find the container with id 327b8727e6410c840433f04cbd53d034521e525fd293018fa7ab5b3697234213
Jan 28 15:07:26 crc kubenswrapper[4981]: I0128 15:07:26.544670 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" event={"ID":"200de941-a8aa-4930-a959-553869b8a2d0","Type":"ContainerDied","Data":"dc1a18a58244c1e2deae9c36ed9771841c97e8d14d067ff804171360ff777150"}
Jan 28 15:07:26 crc kubenswrapper[4981]: I0128 15:07:26.544737 4981 scope.go:117] "RemoveContainer" containerID="61a0c2a5bd23f2fda71f61874858bde5400ecb45987d2943dc3767dacb900614"
Jan 28 15:07:26 crc kubenswrapper[4981]: I0128 15:07:26.544737 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf"
Jan 28 15:07:26 crc kubenswrapper[4981]: I0128 15:07:26.545525 4981 status_manager.go:851] "Failed to get status for pod" podUID="e152bac2-8343-44cd-8df7-659fc89ad725" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:26 crc kubenswrapper[4981]: I0128 15:07:26.545843 4981 status_manager.go:851] "Failed to get status for pod" podUID="200de941-a8aa-4930-a959-553869b8a2d0" pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lm8cf\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:26 crc kubenswrapper[4981]: I0128 15:07:26.546468 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"327b8727e6410c840433f04cbd53d034521e525fd293018fa7ab5b3697234213"}
Jan 28 15:07:26 crc kubenswrapper[4981]: I0128 15:07:26.572546 4981 status_manager.go:851] "Failed to get status for pod" podUID="e152bac2-8343-44cd-8df7-659fc89ad725" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:26 crc kubenswrapper[4981]: I0128 15:07:26.573048 4981 status_manager.go:851] "Failed to get status for pod" podUID="200de941-a8aa-4930-a959-553869b8a2d0" pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lm8cf\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:27 crc kubenswrapper[4981]: I0128 15:07:27.555053 4981 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="b5479d90d1e96ccf270f02ab886b089328e0009cf108228a1e3359943e286812" exitCode=0
Jan 28 15:07:27 crc kubenswrapper[4981]: I0128 15:07:27.555159 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"b5479d90d1e96ccf270f02ab886b089328e0009cf108228a1e3359943e286812"}
Jan 28 15:07:27 crc kubenswrapper[4981]: I0128 15:07:27.555509 4981 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1f1b26ee-5569-4a25-851d-f1e23f13870a"
Jan 28 15:07:27 crc kubenswrapper[4981]: I0128 15:07:27.555537 4981 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1f1b26ee-5569-4a25-851d-f1e23f13870a"
Jan 28 15:07:27 crc kubenswrapper[4981]: I0128 15:07:27.556010 4981 status_manager.go:851] "Failed to get status for pod" podUID="200de941-a8aa-4930-a959-553869b8a2d0" pod="openshift-authentication/oauth-openshift-558db77b4-lm8cf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lm8cf\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:27 crc kubenswrapper[4981]: E0128 15:07:27.556014 4981 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.151:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:07:27 crc kubenswrapper[4981]: I0128 15:07:27.556357 4981 status_manager.go:851] "Failed to get status for pod" podUID="e152bac2-8343-44cd-8df7-659fc89ad725" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.151:6443: connect: connection refused"
Jan 28 15:07:28 crc kubenswrapper[4981]: I0128 15:07:28.571018 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 28 15:07:28 crc kubenswrapper[4981]: I0128 15:07:28.571485 4981 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e" exitCode=1
Jan 28 15:07:28 crc kubenswrapper[4981]: I0128 15:07:28.571561 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e"}
Jan 28 15:07:28 crc kubenswrapper[4981]: I0128 15:07:28.572114 4981 scope.go:117] "RemoveContainer" containerID="5fcf41cde28cc422b596ff8cdb3426ac9237e01f957f78682b36494a7046fd6e"
Jan 28 15:07:28 crc kubenswrapper[4981]: I0128 15:07:28.576521 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4ad6dbdc4f6a67a831de2f7ddb48886126fccaf54b98f70a7cd34858e7094e98"}
Jan 28 15:07:28 crc kubenswrapper[4981]: I0128 15:07:28.576574 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ea845d2569a4283209d877087b1ceaab9079720e4f79907070391d081438c877"}
Jan 28 15:07:28 crc kubenswrapper[4981]: I0128 15:07:28.576591 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8a2de744e19392803f6df785c41e1488156af49cab7ea9006d35b12e11750c99"}
Jan 28 15:07:29 crc kubenswrapper[4981]: I0128 15:07:29.584458 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f8c0fb681773e825151d7db3eed3ebfcae367dfaa1443a54af88568089e8d14e"}
Jan 28 15:07:29 crc kubenswrapper[4981]: I0128 15:07:29.584887 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"aafdbd4b4ebd846b5022bd8460798ab203cc2bbcb6495053bc8435065dc3cfb8"}
Jan 28 15:07:29 crc kubenswrapper[4981]: I0128 15:07:29.584915 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:07:29 crc kubenswrapper[4981]: I0128 15:07:29.584748 4981 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1f1b26ee-5569-4a25-851d-f1e23f13870a"
Jan 28 15:07:29 crc kubenswrapper[4981]: I0128 15:07:29.584941 4981 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1f1b26ee-5569-4a25-851d-f1e23f13870a"
Jan 28 15:07:29 crc kubenswrapper[4981]: I0128 15:07:29.587210 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 28 15:07:29 crc kubenswrapper[4981]: I0128 15:07:29.587272 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b87aaadc516617074e9f1decaadc419300e883cc873bf92ec64a35231701e1f8"}
Jan 28 15:07:31 crc kubenswrapper[4981]: I0128 15:07:31.343166 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:07:31 crc kubenswrapper[4981]: I0128 15:07:31.343585 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:07:31 crc kubenswrapper[4981]: I0128 15:07:31.350747 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:07:34 crc kubenswrapper[4981]: I0128 15:07:34.587344 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:07:34 crc kubenswrapper[4981]: I0128 15:07:34.591627 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:07:34 crc kubenswrapper[4981]: I0128 15:07:34.603244 4981 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:07:34 crc kubenswrapper[4981]: I0128 15:07:34.615745 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:07:35 crc kubenswrapper[4981]: I0128 15:07:35.623454 4981 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1f1b26ee-5569-4a25-851d-f1e23f13870a"
Jan 28 15:07:35 crc kubenswrapper[4981]: I0128 15:07:35.623534 4981 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1f1b26ee-5569-4a25-851d-f1e23f13870a"
Jan 28 15:07:35 crc kubenswrapper[4981]: I0128 15:07:35.629596 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:07:35 crc kubenswrapper[4981]: I0128 15:07:35.633973 4981 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b11b41f9-a706-4275-a177-df8e60d1d3fe"
Jan 28 15:07:36 crc kubenswrapper[4981]: I0128 15:07:36.629592 4981 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1f1b26ee-5569-4a25-851d-f1e23f13870a"
Jan 28 15:07:36 crc kubenswrapper[4981]: I0128 15:07:36.629646 4981 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="1f1b26ee-5569-4a25-851d-f1e23f13870a"
Jan 28 15:07:39 crc kubenswrapper[4981]: I0128 15:07:39.343875 4981 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b11b41f9-a706-4275-a177-df8e60d1d3fe"
Jan 28 15:07:44 crc kubenswrapper[4981]: I0128 15:07:44.538410 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 28 15:07:44 crc kubenswrapper[4981]: I0128 15:07:44.816785 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 28 15:07:45 crc kubenswrapper[4981]: I0128 15:07:45.036577 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 28 15:07:45 crc kubenswrapper[4981]: I0128 15:07:45.218464 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 28 15:07:45 crc kubenswrapper[4981]: I0128 15:07:45.497467 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 28 15:07:45 crc kubenswrapper[4981]: I0128 15:07:45.938223 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 28 15:07:46 crc kubenswrapper[4981]: I0128 15:07:46.101628 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 28 15:07:46 crc kubenswrapper[4981]: I0128 15:07:46.128101 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 28 15:07:46 crc kubenswrapper[4981]: I0128 15:07:46.260279 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 28 15:07:46 crc kubenswrapper[4981]: I0128 15:07:46.262134 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 28 15:07:46 crc kubenswrapper[4981]: I0128 15:07:46.370393 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 28 15:07:46 crc kubenswrapper[4981]: I0128 15:07:46.470343 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 28 15:07:46 crc kubenswrapper[4981]: I0128 15:07:46.526055 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 28 15:07:46 crc kubenswrapper[4981]: I0128 15:07:46.574853 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 28 15:07:46 crc kubenswrapper[4981]: I0128 15:07:46.581005 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 28 15:07:46 crc kubenswrapper[4981]: I0128 15:07:46.837281 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 15:07:47 crc kubenswrapper[4981]: I0128 15:07:47.034220 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 28 15:07:47 crc kubenswrapper[4981]: I0128 15:07:47.119335 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 28 15:07:47 crc kubenswrapper[4981]: I0128 15:07:47.252846 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 28 15:07:47 crc kubenswrapper[4981]: I0128 15:07:47.600211 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 28 15:07:47 crc kubenswrapper[4981]: I0128 15:07:47.628812 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 28 15:07:47 crc kubenswrapper[4981]: I0128 15:07:47.803113 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 28 15:07:47 crc kubenswrapper[4981]: I0128 15:07:47.829379 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 28 15:07:48 crc kubenswrapper[4981]: I0128 15:07:48.091514 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 28 15:07:48 crc kubenswrapper[4981]: I0128 15:07:48.351032 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 28 15:07:48 crc kubenswrapper[4981]: I0128 15:07:48.386239 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 28 15:07:48 crc kubenswrapper[4981]: I0128 15:07:48.446207 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 28 15:07:48 crc kubenswrapper[4981]: I0128 15:07:48.465671 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 28 15:07:48 crc kubenswrapper[4981]: I0128 15:07:48.468126 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 28 15:07:48 crc kubenswrapper[4981]: I0128 15:07:48.468581 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 28 15:07:48 crc kubenswrapper[4981]: I0128 15:07:48.475451 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 28 15:07:48 crc kubenswrapper[4981]: I0128 15:07:48.524549 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 28 15:07:48 crc kubenswrapper[4981]: I0128 15:07:48.538657 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 28 15:07:48 crc kubenswrapper[4981]: I0128 15:07:48.554728 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 28 15:07:49 crc kubenswrapper[4981]: I0128 15:07:49.034652 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 28 15:07:49 crc kubenswrapper[4981]: I0128 15:07:49.034722 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 28 15:07:49 crc kubenswrapper[4981]: I0128 15:07:49.035086 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 28 15:07:49 crc kubenswrapper[4981]: I0128 15:07:49.037069 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 28 15:07:49 crc kubenswrapper[4981]: I0128 15:07:49.041515 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 28 15:07:49 crc kubenswrapper[4981]: I0128 15:07:49.107397 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 28 15:07:49 crc kubenswrapper[4981]: I0128 15:07:49.123827 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 28 15:07:49 crc kubenswrapper[4981]: I0128 15:07:49.163010 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 28 15:07:49 crc kubenswrapper[4981]: I0128 15:07:49.169668 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 28 15:07:49 crc kubenswrapper[4981]: I0128 15:07:49.360963 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 28 15:07:49 crc kubenswrapper[4981]: I0128 15:07:49.444994 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 28 15:07:49 crc kubenswrapper[4981]: I0128 15:07:49.645752 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 28 15:07:49 crc kubenswrapper[4981]: I0128 15:07:49.664023 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 28 15:07:49 crc kubenswrapper[4981]: I0128 15:07:49.716780 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 28 15:07:49 crc kubenswrapper[4981]: I0128 15:07:49.789906 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 28 15:07:49 crc kubenswrapper[4981]: I0128 15:07:49.879995 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 28 15:07:49 crc kubenswrapper[4981]: I0128 15:07:49.948658 4981 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 28 15:07:49 crc kubenswrapper[4981]: I0128 15:07:49.969552 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.020618 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.021159 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.023863 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.028109 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.057418 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.140243 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.146667 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.233756 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.280923 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.287519 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.391626 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.427302 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.451170 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.453808 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.471458 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.679350 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.686906 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.699506 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.748468 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.784329 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.829363 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.830964 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.831556 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.857068 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.904235 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.937495 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.959614 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 28 15:07:50 crc kubenswrapper[4981]: I0128 15:07:50.978028 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 28 15:07:51 crc kubenswrapper[4981]: I0128 15:07:51.083497 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 28 15:07:51 crc kubenswrapper[4981]: I0128 15:07:51.141280 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 28 15:07:51 crc kubenswrapper[4981]: I0128 15:07:51.395957 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 28 15:07:51 crc kubenswrapper[4981]: I0128 15:07:51.402009 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 28 15:07:51 crc kubenswrapper[4981]: I0128 15:07:51.512109 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 28 15:07:51 crc kubenswrapper[4981]: I0128 15:07:51.537973 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 28 15:07:51 crc kubenswrapper[4981]: I0128 15:07:51.556471 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 28 15:07:51 crc kubenswrapper[4981]: I0128 15:07:51.557482 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 28 15:07:51 crc kubenswrapper[4981]: I0128 15:07:51.653162 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 28 15:07:51 crc kubenswrapper[4981]: I0128 15:07:51.722983 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 28 15:07:51 crc kubenswrapper[4981]: I0128 15:07:51.891325 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 28 15:07:51 crc kubenswrapper[4981]: I0128 15:07:51.979276 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 28 15:07:52 crc kubenswrapper[4981]: I0128 15:07:52.088541 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 28 15:07:52 crc kubenswrapper[4981]: I0128 15:07:52.184710 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 28 15:07:52 crc kubenswrapper[4981]: I0128 15:07:52.250120 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 28 15:07:52 crc kubenswrapper[4981]: I0128 15:07:52.380280 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 28 15:07:52 crc kubenswrapper[4981]: I0128 15:07:52.384276 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 28 15:07:52 crc kubenswrapper[4981]: I0128 15:07:52.460343 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 28 15:07:52 crc kubenswrapper[4981]: I0128 15:07:52.579604 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 28 15:07:52 crc kubenswrapper[4981]: I0128 15:07:52.667511 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 28 15:07:52 crc kubenswrapper[4981]: I0128 15:07:52.761980 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 28 15:07:52 crc kubenswrapper[4981]: I0128 15:07:52.876342 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 28 15:07:52 crc kubenswrapper[4981]: I0128 15:07:52.877251 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 28 15:07:52 crc kubenswrapper[4981]: I0128 15:07:52.961143 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.010589 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.037479 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.153940 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.195770 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.195855 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.230658 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.241604 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.270742 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.316605 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.371839 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.388584 4981 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.474353 4981 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.479275 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-lm8cf"]
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.479336 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.484003 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.491655 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.523010 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.52299142 podStartE2EDuration="19.52299142s" podCreationTimestamp="2026-01-28 15:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:07:53.500449299 +0000 UTC m=+284.952607540" watchObservedRunningTime="2026-01-28 15:07:53.52299142 +0000 UTC m=+284.975149661"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.549403 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.631245 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.637927 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.662873 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.670716 4981 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.689853 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.719807 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.741935 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.756960 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.770169 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.796741 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.837527 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.855012 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.893766 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 28 15:07:53 crc kubenswrapper[4981]: I0128 15:07:53.921478 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 28 15:07:54 crc kubenswrapper[4981]: I0128 15:07:54.030075 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 28 15:07:54 crc kubenswrapper[4981]: I0128 15:07:54.061922 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 28 15:07:54 crc kubenswrapper[4981]: I0128 15:07:54.083056 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 28 15:07:54 crc kubenswrapper[4981]: I0128 15:07:54.109622 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 28 15:07:54 crc kubenswrapper[4981]: I0128 15:07:54.363287 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 28 15:07:54 crc kubenswrapper[4981]: I0128 15:07:54.407279 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 28 15:07:54 crc kubenswrapper[4981]: I0128 15:07:54.607373 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 28 15:07:54 crc kubenswrapper[4981]: I0128 15:07:54.609727 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 28 15:07:54 crc kubenswrapper[4981]: I0128 15:07:54.757523 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 28 15:07:54 crc kubenswrapper[4981]: I0128 15:07:54.795593 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 28 15:07:54 crc kubenswrapper[4981]: I0128 15:07:54.806861 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 28 15:07:54 crc kubenswrapper[4981]: I0128 15:07:54.922888 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 28 15:07:54 crc kubenswrapper[4981]: I0128 15:07:54.937178 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.134881 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.225512 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.285794 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.325809 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.331706 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="200de941-a8aa-4930-a959-553869b8a2d0" path="/var/lib/kubelet/pods/200de941-a8aa-4930-a959-553869b8a2d0/volumes"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.343396 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-8495797ccf-qsjgt"]
Jan 28 15:07:55 crc kubenswrapper[4981]: E0128 15:07:55.343704 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200de941-a8aa-4930-a959-553869b8a2d0" containerName="oauth-openshift"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.343734 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="200de941-a8aa-4930-a959-553869b8a2d0" containerName="oauth-openshift"
Jan 28 15:07:55 crc kubenswrapper[4981]: E0128 15:07:55.343755 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e152bac2-8343-44cd-8df7-659fc89ad725" containerName="installer"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.343768 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e152bac2-8343-44cd-8df7-659fc89ad725" containerName="installer"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.343953 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="200de941-a8aa-4930-a959-553869b8a2d0" containerName="oauth-openshift"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.343979 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e152bac2-8343-44cd-8df7-659fc89ad725" containerName="installer"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.344674 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.351922 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.352183 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.352411 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.352588 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.353241 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.354524 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.354658 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.354663 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.354692 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.354716 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.354903 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.355707 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.368814 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.368999 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.374695 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.377215 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.411520 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.470236 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.493866 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.503961 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c784431-f2b5-4ad8-876c-bcf5a675928d-audit-dir\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.504005 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w6zj\" (UniqueName: \"kubernetes.io/projected/9c784431-f2b5-4ad8-876c-bcf5a675928d-kube-api-access-5w6zj\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.504029 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c784431-f2b5-4ad8-876c-bcf5a675928d-audit-policies\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.504046 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.504063 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.504083 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-session\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.504099 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.504339 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.504359 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-service-ca\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.504389 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-user-template-error\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.504411 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.504449 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.504481 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-user-template-login\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.504501 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-router-certs\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.605860 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.606000 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-user-template-login\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.606062 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-router-certs\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.606134 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c784431-f2b5-4ad8-876c-bcf5a675928d-audit-dir\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.606179 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w6zj\" (UniqueName: \"kubernetes.io/projected/9c784431-f2b5-4ad8-876c-bcf5a675928d-kube-api-access-5w6zj\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.606262 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c784431-f2b5-4ad8-876c-bcf5a675928d-audit-policies\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.606318 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.606370 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.606423 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-session\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.606472 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.606536 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.606703 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-service-ca\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.606815 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-user-template-error\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.606366 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9c784431-f2b5-4ad8-876c-bcf5a675928d-audit-dir\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.606881 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.607663 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9c784431-f2b5-4ad8-876c-bcf5a675928d-audit-policies\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.608102 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-service-ca\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.608782 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.608771 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.612788 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-user-template-error\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.613457 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-router-certs\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.613849 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-user-template-login\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.614303 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.614852 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.615022 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.615045 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-session\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.615837 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c784431-f2b5-4ad8-876c-bcf5a675928d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.636448 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w6zj\" (UniqueName: \"kubernetes.io/projected/9c784431-f2b5-4ad8-876c-bcf5a675928d-kube-api-access-5w6zj\") pod \"oauth-openshift-8495797ccf-qsjgt\" (UID: \"9c784431-f2b5-4ad8-876c-bcf5a675928d\") " pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.669729 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.727280 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.804647 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.833309 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.845634 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.852794 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.910978 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 28 15:07:55 crc kubenswrapper[4981]: I0128 15:07:55.939578 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.007643 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.048606 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.077273 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.078583 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.153496 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.156618 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.228361 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.268857 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.337751 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.346524 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.351702 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.429509 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.474547 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.593392 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.716255 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.738455 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.883624 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.884572 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.915057 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.932912 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.936339 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.948515 4981 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 28 15:07:56 crc kubenswrapper[4981]: I0128 15:07:56.953606 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 28 15:07:57 crc kubenswrapper[4981]: I0128 15:07:57.026626 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 28 15:07:57 crc kubenswrapper[4981]: I0128 15:07:57.185637 4981 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 28 15:07:57 crc kubenswrapper[4981]: I0128 15:07:57.185899 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://e3d53e30a8fb7aa5c3d8c1dc47b1f770c3b4d4536e7fd00bf9223394426397c3" gracePeriod=5
Jan 28 15:07:57 crc kubenswrapper[4981]: I0128 15:07:57.208463 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 28 15:07:57 crc kubenswrapper[4981]: I0128 15:07:57.214909 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 28 15:07:57 crc kubenswrapper[4981]: I0128 15:07:57.247139 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 28 15:07:57 crc kubenswrapper[4981]: I0128 15:07:57.345083 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 28 15:07:57 crc kubenswrapper[4981]: I0128 15:07:57.407153 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 28 15:07:57 crc kubenswrapper[4981]: I0128 15:07:57.472137 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 28 15:07:57 crc kubenswrapper[4981]: I0128 15:07:57.540349 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 28 15:07:57 crc kubenswrapper[4981]: I0128 15:07:57.605106 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 28 15:07:57 crc kubenswrapper[4981]: I0128 15:07:57.660906 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 28 15:07:57 crc kubenswrapper[4981]: I0128 15:07:57.680302 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 28 15:07:57 crc kubenswrapper[4981]: I0128 15:07:57.773776 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 28 15:07:57 crc kubenswrapper[4981]: I0128 15:07:57.784873 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 28 15:07:57 crc kubenswrapper[4981]: I0128 15:07:57.950162 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-8495797ccf-qsjgt"]
Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.073653 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.120253 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.136435 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 28 15:07:58 crc 
kubenswrapper[4981]: I0128 15:07:58.239966 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.255599 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.260105 4981 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.264730 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.280692 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.305048 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.317568 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-8495797ccf-qsjgt"] Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.352056 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.421619 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.606929 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.610136 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.640351 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.666437 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.741997 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.770474 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt" event={"ID":"9c784431-f2b5-4ad8-876c-bcf5a675928d","Type":"ContainerStarted","Data":"cd41979092a83f93efeb2e4f27b700455e942cfd70ad4bbdaffe4edde4473417"} Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.770526 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt" event={"ID":"9c784431-f2b5-4ad8-876c-bcf5a675928d","Type":"ContainerStarted","Data":"cb568f20a7309b7b4ad0f8e019f8b8e636405c39bc0e51cec942ea299d1a2a13"} Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.772516 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.805700 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.824557 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.883568 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.885350 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.909653 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.934396 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-8495797ccf-qsjgt" podStartSLOduration=58.934360319 podStartE2EDuration="58.934360319s" podCreationTimestamp="2026-01-28 15:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:07:58.803439941 +0000 UTC m=+290.255598182" watchObservedRunningTime="2026-01-28 15:07:58.934360319 +0000 UTC m=+290.386518600" Jan 28 15:07:58 crc kubenswrapper[4981]: I0128 15:07:58.949556 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 15:07:59 crc kubenswrapper[4981]: I0128 15:07:59.051428 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 15:07:59 crc kubenswrapper[4981]: I0128 15:07:59.118914 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 15:07:59 crc kubenswrapper[4981]: I0128 15:07:59.458904 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 15:07:59 crc kubenswrapper[4981]: I0128 15:07:59.468716 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 15:07:59 crc kubenswrapper[4981]: I0128 15:07:59.486163 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 15:07:59 crc kubenswrapper[4981]: I0128 15:07:59.528861 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 15:07:59 crc kubenswrapper[4981]: I0128 15:07:59.536346 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 15:07:59 crc kubenswrapper[4981]: I0128 15:07:59.660829 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 15:07:59 crc kubenswrapper[4981]: I0128 15:07:59.667129 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 15:07:59 crc kubenswrapper[4981]: I0128 15:07:59.727174 
4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:07:59 crc kubenswrapper[4981]: I0128 15:07:59.934418 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 15:07:59 crc kubenswrapper[4981]: I0128 15:07:59.958562 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 15:08:00 crc kubenswrapper[4981]: I0128 15:08:00.032446 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 15:08:00 crc kubenswrapper[4981]: I0128 15:08:00.301681 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 28 15:08:00 crc kubenswrapper[4981]: I0128 15:08:00.312069 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 28 15:08:01 crc kubenswrapper[4981]: I0128 15:08:01.047124 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 15:08:01 crc kubenswrapper[4981]: I0128 15:08:01.115474 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 15:08:01 crc kubenswrapper[4981]: I0128 15:08:01.481401 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 15:08:01 crc kubenswrapper[4981]: I0128 15:08:01.634153 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.518320 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.794171 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.794419 4981 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="e3d53e30a8fb7aa5c3d8c1dc47b1f770c3b4d4536e7fd00bf9223394426397c3" exitCode=137 Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.794561 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da1629fbec9cf0a26fcb2416fcacaa4afaf17e8ec0221ebef46366d91315741a" Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.794980 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.795094 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.905497 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.905631 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.905784 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.905808 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.905964 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.906097 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.906047 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.906213 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.906297 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.906898 4981 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.906946 4981 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.906986 4981 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.907002 4981 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.915304 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 15:08:02 crc kubenswrapper[4981]: I0128 15:08:02.915747 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:08:03 crc kubenswrapper[4981]: I0128 15:08:03.008791 4981 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:03 crc kubenswrapper[4981]: I0128 15:08:03.324128 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 28 15:08:03 crc kubenswrapper[4981]: I0128 15:08:03.799042 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 15:08:09 crc kubenswrapper[4981]: I0128 15:08:09.071943 4981 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 28 15:08:17 crc kubenswrapper[4981]: I0128 15:08:17.895473 4981 generic.go:334] "Generic (PLEG): container finished" podID="a2889542-ffb9-4af8-8f77-ccfd601dec88" containerID="7833d7e5892341d81930b3592d51102624eb31f053a148cbdc058abc4de2cb5e" exitCode=0 Jan 28 15:08:17 crc kubenswrapper[4981]: I0128 15:08:17.895592 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" event={"ID":"a2889542-ffb9-4af8-8f77-ccfd601dec88","Type":"ContainerDied","Data":"7833d7e5892341d81930b3592d51102624eb31f053a148cbdc058abc4de2cb5e"} Jan 28 15:08:17 crc kubenswrapper[4981]: I0128 15:08:17.897254 4981 scope.go:117] "RemoveContainer" containerID="7833d7e5892341d81930b3592d51102624eb31f053a148cbdc058abc4de2cb5e" Jan 28 15:08:18 crc kubenswrapper[4981]: I0128 15:08:18.903568 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" event={"ID":"a2889542-ffb9-4af8-8f77-ccfd601dec88","Type":"ContainerStarted","Data":"8fd2dad8b1bbd6c41ea3886470f4050c51bc2f180988277d4765f595e236556d"} Jan 28 15:08:18 crc kubenswrapper[4981]: I0128 15:08:18.903897 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" Jan 28 15:08:18 crc kubenswrapper[4981]: I0128 15:08:18.905934 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" Jan 28 15:08:28 crc kubenswrapper[4981]: I0128 15:08:28.873605 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lc28t"] Jan 28 15:08:28 crc kubenswrapper[4981]: I0128 15:08:28.874334 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" podUID="c0008f36-c407-4f88-9da3-55a32f23bf4d" containerName="controller-manager" containerID="cri-o://4dfe031091dda0dc6a7f00faea9dc75cd3e0a8ab2039932bfd5b02e580503193" gracePeriod=30 Jan 28 15:08:28 crc kubenswrapper[4981]: I0128 15:08:28.973788 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j"] Jan 28 15:08:28 crc kubenswrapper[4981]: I0128 15:08:28.974013 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" podUID="f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70" containerName="route-controller-manager" containerID="cri-o://c6cf6d530c76ca558c172a358a4ecbabeedb36f9d6ec144c2c2918c0fac05835" gracePeriod=30 Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.234503 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.292476 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.372692 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0008f36-c407-4f88-9da3-55a32f23bf4d-client-ca\") pod \"c0008f36-c407-4f88-9da3-55a32f23bf4d\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.372741 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0008f36-c407-4f88-9da3-55a32f23bf4d-serving-cert\") pod \"c0008f36-c407-4f88-9da3-55a32f23bf4d\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.372775 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzqp6\" (UniqueName: \"kubernetes.io/projected/c0008f36-c407-4f88-9da3-55a32f23bf4d-kube-api-access-kzqp6\") pod \"c0008f36-c407-4f88-9da3-55a32f23bf4d\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.372820 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0008f36-c407-4f88-9da3-55a32f23bf4d-config\") pod \"c0008f36-c407-4f88-9da3-55a32f23bf4d\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.372913 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0008f36-c407-4f88-9da3-55a32f23bf4d-proxy-ca-bundles\") pod \"c0008f36-c407-4f88-9da3-55a32f23bf4d\" (UID: \"c0008f36-c407-4f88-9da3-55a32f23bf4d\") " Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.373598 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0008f36-c407-4f88-9da3-55a32f23bf4d-client-ca" (OuterVolumeSpecName: "client-ca") pod "c0008f36-c407-4f88-9da3-55a32f23bf4d" (UID: "c0008f36-c407-4f88-9da3-55a32f23bf4d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.373968 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0008f36-c407-4f88-9da3-55a32f23bf4d-config" (OuterVolumeSpecName: "config") pod "c0008f36-c407-4f88-9da3-55a32f23bf4d" (UID: "c0008f36-c407-4f88-9da3-55a32f23bf4d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.374177 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0008f36-c407-4f88-9da3-55a32f23bf4d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c0008f36-c407-4f88-9da3-55a32f23bf4d" (UID: "c0008f36-c407-4f88-9da3-55a32f23bf4d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.378599 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0008f36-c407-4f88-9da3-55a32f23bf4d-kube-api-access-kzqp6" (OuterVolumeSpecName: "kube-api-access-kzqp6") pod "c0008f36-c407-4f88-9da3-55a32f23bf4d" (UID: "c0008f36-c407-4f88-9da3-55a32f23bf4d"). 
InnerVolumeSpecName "kube-api-access-kzqp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.378685 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0008f36-c407-4f88-9da3-55a32f23bf4d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c0008f36-c407-4f88-9da3-55a32f23bf4d" (UID: "c0008f36-c407-4f88-9da3-55a32f23bf4d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.474173 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-client-ca\") pod \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\" (UID: \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\") " Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.474262 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-serving-cert\") pod \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\" (UID: \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\") " Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.474313 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56ksn\" (UniqueName: \"kubernetes.io/projected/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-kube-api-access-56ksn\") pod \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\" (UID: \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\") " Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.474372 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-config\") pod \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\" (UID: \"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70\") " Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.474596 4981 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c0008f36-c407-4f88-9da3-55a32f23bf4d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.474613 4981 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c0008f36-c407-4f88-9da3-55a32f23bf4d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.474622 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c0008f36-c407-4f88-9da3-55a32f23bf4d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.474631 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzqp6\" (UniqueName: \"kubernetes.io/projected/c0008f36-c407-4f88-9da3-55a32f23bf4d-kube-api-access-kzqp6\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.474640 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0008f36-c407-4f88-9da3-55a32f23bf4d-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.475755 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-client-ca" (OuterVolumeSpecName: "client-ca") pod "f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70" (UID: 
"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.476079 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-config" (OuterVolumeSpecName: "config") pod "f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70" (UID: "f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.477776 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70" (UID: "f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.477810 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-kube-api-access-56ksn" (OuterVolumeSpecName: "kube-api-access-56ksn") pod "f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70" (UID: "f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70"). InnerVolumeSpecName "kube-api-access-56ksn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.576322 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56ksn\" (UniqueName: \"kubernetes.io/projected/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-kube-api-access-56ksn\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.576386 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.576398 4981 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.576410 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.970358 4981 generic.go:334] "Generic (PLEG): container finished" podID="c0008f36-c407-4f88-9da3-55a32f23bf4d" containerID="4dfe031091dda0dc6a7f00faea9dc75cd3e0a8ab2039932bfd5b02e580503193" exitCode=0 Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.970456 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.970463 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" event={"ID":"c0008f36-c407-4f88-9da3-55a32f23bf4d","Type":"ContainerDied","Data":"4dfe031091dda0dc6a7f00faea9dc75cd3e0a8ab2039932bfd5b02e580503193"} Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.970588 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-lc28t" event={"ID":"c0008f36-c407-4f88-9da3-55a32f23bf4d","Type":"ContainerDied","Data":"14db3061d60b528e8b8b57b0931b3f7ec301ebe67cb57e1840b435e7975f429b"} Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.970639 4981 scope.go:117] "RemoveContainer" containerID="4dfe031091dda0dc6a7f00faea9dc75cd3e0a8ab2039932bfd5b02e580503193" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.974510 4981 generic.go:334] "Generic (PLEG): container finished" podID="f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70" containerID="c6cf6d530c76ca558c172a358a4ecbabeedb36f9d6ec144c2c2918c0fac05835" exitCode=0 Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.974589 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" event={"ID":"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70","Type":"ContainerDied","Data":"c6cf6d530c76ca558c172a358a4ecbabeedb36f9d6ec144c2c2918c0fac05835"} Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.974664 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" event={"ID":"f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70","Type":"ContainerDied","Data":"ba012dd36a26a894b1ad83b96a26aaa539bb4af3d91529f6777d9aa252aa00e4"} Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.974711 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j" Jan 28 15:08:29 crc kubenswrapper[4981]: I0128 15:08:29.999320 4981 scope.go:117] "RemoveContainer" containerID="4dfe031091dda0dc6a7f00faea9dc75cd3e0a8ab2039932bfd5b02e580503193" Jan 28 15:08:30 crc kubenswrapper[4981]: E0128 15:08:30.000022 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dfe031091dda0dc6a7f00faea9dc75cd3e0a8ab2039932bfd5b02e580503193\": container with ID starting with 4dfe031091dda0dc6a7f00faea9dc75cd3e0a8ab2039932bfd5b02e580503193 not found: ID does not exist" containerID="4dfe031091dda0dc6a7f00faea9dc75cd3e0a8ab2039932bfd5b02e580503193" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.000085 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dfe031091dda0dc6a7f00faea9dc75cd3e0a8ab2039932bfd5b02e580503193"} err="failed to get container status \"4dfe031091dda0dc6a7f00faea9dc75cd3e0a8ab2039932bfd5b02e580503193\": rpc error: code = NotFound desc = could not find container \"4dfe031091dda0dc6a7f00faea9dc75cd3e0a8ab2039932bfd5b02e580503193\": container with ID starting with 4dfe031091dda0dc6a7f00faea9dc75cd3e0a8ab2039932bfd5b02e580503193 not found: ID does not exist" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.000129 4981 scope.go:117] "RemoveContainer" containerID="c6cf6d530c76ca558c172a358a4ecbabeedb36f9d6ec144c2c2918c0fac05835" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.040163 4981 scope.go:117] "RemoveContainer" containerID="c6cf6d530c76ca558c172a358a4ecbabeedb36f9d6ec144c2c2918c0fac05835" Jan 28 15:08:30 crc kubenswrapper[4981]: E0128 15:08:30.041550 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6cf6d530c76ca558c172a358a4ecbabeedb36f9d6ec144c2c2918c0fac05835\": container with ID starting with c6cf6d530c76ca558c172a358a4ecbabeedb36f9d6ec144c2c2918c0fac05835 not found: ID does not exist" containerID="c6cf6d530c76ca558c172a358a4ecbabeedb36f9d6ec144c2c2918c0fac05835" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.041635 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6cf6d530c76ca558c172a358a4ecbabeedb36f9d6ec144c2c2918c0fac05835"} err="failed to get container status \"c6cf6d530c76ca558c172a358a4ecbabeedb36f9d6ec144c2c2918c0fac05835\": rpc error: code = NotFound desc = could not find container \"c6cf6d530c76ca558c172a358a4ecbabeedb36f9d6ec144c2c2918c0fac05835\": container with ID starting with c6cf6d530c76ca558c172a358a4ecbabeedb36f9d6ec144c2c2918c0fac05835 not found: ID does not exist" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.044076 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lc28t"] Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.050052 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-lc28t"] Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.054663 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j"] Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.058400 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zsq6j"] Jan 28 
15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.734771 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn"] Jan 28 15:08:30 crc kubenswrapper[4981]: E0128 15:08:30.735164 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70" containerName="route-controller-manager" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.735214 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70" containerName="route-controller-manager" Jan 28 15:08:30 crc kubenswrapper[4981]: E0128 15:08:30.735230 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.735242 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 15:08:30 crc kubenswrapper[4981]: E0128 15:08:30.735260 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0008f36-c407-4f88-9da3-55a32f23bf4d" containerName="controller-manager" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.735275 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0008f36-c407-4f88-9da3-55a32f23bf4d" containerName="controller-manager" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.735439 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0008f36-c407-4f88-9da3-55a32f23bf4d" containerName="controller-manager" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.735461 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.735484 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70" containerName="route-controller-manager" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.736093 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.738302 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.738339 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.738683 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.738853 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s"] Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.738883 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.738948 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.739063 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.739645 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.743801 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.744085 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.744209 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.744375 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.744428 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.744774 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.756806 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s"] Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.756989 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.764339 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn"] Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.894812 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dec7f91e-c9f4-4450-abd5-46305526d3a9-serving-cert\") pod \"controller-manager-bd8d95bb7-j4p8s\" (UID: \"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.894952 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b97d19a2-ea0e-42fd-9bc1-473e64827444-serving-cert\") pod \"route-controller-manager-9b46d45b9-b64kn\" (UID: \"b97d19a2-ea0e-42fd-9bc1-473e64827444\") " pod="openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.895006 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dec7f91e-c9f4-4450-abd5-46305526d3a9-proxy-ca-bundles\") pod \"controller-manager-bd8d95bb7-j4p8s\" (UID: \"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.895077 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m27h\" (UniqueName: \"kubernetes.io/projected/dec7f91e-c9f4-4450-abd5-46305526d3a9-kube-api-access-6m27h\") pod \"controller-manager-bd8d95bb7-j4p8s\" (UID: \"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.895145 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b97d19a2-ea0e-42fd-9bc1-473e64827444-config\") pod \"route-controller-manager-9b46d45b9-b64kn\" (UID: \"b97d19a2-ea0e-42fd-9bc1-473e64827444\") " pod="openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.895230 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b97d19a2-ea0e-42fd-9bc1-473e64827444-client-ca\") pod \"route-controller-manager-9b46d45b9-b64kn\" (UID: \"b97d19a2-ea0e-42fd-9bc1-473e64827444\") " pod="openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.895350 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96x6w\" (UniqueName: \"kubernetes.io/projected/b97d19a2-ea0e-42fd-9bc1-473e64827444-kube-api-access-96x6w\") pod \"route-controller-manager-9b46d45b9-b64kn\" (UID: \"b97d19a2-ea0e-42fd-9bc1-473e64827444\") " pod="openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.895434 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dec7f91e-c9f4-4450-abd5-46305526d3a9-config\") pod \"controller-manager-bd8d95bb7-j4p8s\" (UID: \"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.895471 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/dec7f91e-c9f4-4450-abd5-46305526d3a9-client-ca\") pod \"controller-manager-bd8d95bb7-j4p8s\" (UID: \"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.996490 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dec7f91e-c9f4-4450-abd5-46305526d3a9-serving-cert\") pod \"controller-manager-bd8d95bb7-j4p8s\" (UID: \"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.996586 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b97d19a2-ea0e-42fd-9bc1-473e64827444-serving-cert\") pod \"route-controller-manager-9b46d45b9-b64kn\" (UID: \"b97d19a2-ea0e-42fd-9bc1-473e64827444\") " pod="openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.996620 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dec7f91e-c9f4-4450-abd5-46305526d3a9-proxy-ca-bundles\") pod \"controller-manager-bd8d95bb7-j4p8s\" (UID: \"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.996669 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m27h\" (UniqueName: \"kubernetes.io/projected/dec7f91e-c9f4-4450-abd5-46305526d3a9-kube-api-access-6m27h\") pod \"controller-manager-bd8d95bb7-j4p8s\" (UID: \"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.996712 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b97d19a2-ea0e-42fd-9bc1-473e64827444-config\") pod \"route-controller-manager-9b46d45b9-b64kn\" (UID: \"b97d19a2-ea0e-42fd-9bc1-473e64827444\") " pod="openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.996748 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b97d19a2-ea0e-42fd-9bc1-473e64827444-client-ca\") pod \"route-controller-manager-9b46d45b9-b64kn\" (UID: \"b97d19a2-ea0e-42fd-9bc1-473e64827444\") " pod="openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.996780 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96x6w\" (UniqueName: \"kubernetes.io/projected/b97d19a2-ea0e-42fd-9bc1-473e64827444-kube-api-access-96x6w\") pod \"route-controller-manager-9b46d45b9-b64kn\" (UID: \"b97d19a2-ea0e-42fd-9bc1-473e64827444\") " pod="openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.996823 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dec7f91e-c9f4-4450-abd5-46305526d3a9-config\") pod \"controller-manager-bd8d95bb7-j4p8s\" (UID: 
\"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.996852 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dec7f91e-c9f4-4450-abd5-46305526d3a9-client-ca\") pod \"controller-manager-bd8d95bb7-j4p8s\" (UID: \"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.998596 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dec7f91e-c9f4-4450-abd5-46305526d3a9-client-ca\") pod \"controller-manager-bd8d95bb7-j4p8s\" (UID: \"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.999571 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b97d19a2-ea0e-42fd-9bc1-473e64827444-client-ca\") pod \"route-controller-manager-9b46d45b9-b64kn\" (UID: \"b97d19a2-ea0e-42fd-9bc1-473e64827444\") " pod="openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn" Jan 28 15:08:30 crc kubenswrapper[4981]: I0128 15:08:30.999684 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b97d19a2-ea0e-42fd-9bc1-473e64827444-config\") pod \"route-controller-manager-9b46d45b9-b64kn\" (UID: \"b97d19a2-ea0e-42fd-9bc1-473e64827444\") " pod="openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn" Jan 28 15:08:31 crc kubenswrapper[4981]: I0128 15:08:31.001133 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dec7f91e-c9f4-4450-abd5-46305526d3a9-config\") pod \"controller-manager-bd8d95bb7-j4p8s\" (UID: \"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:31 crc kubenswrapper[4981]: I0128 15:08:31.002119 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dec7f91e-c9f4-4450-abd5-46305526d3a9-proxy-ca-bundles\") pod \"controller-manager-bd8d95bb7-j4p8s\" (UID: \"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:31 crc kubenswrapper[4981]: I0128 15:08:31.002998 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dec7f91e-c9f4-4450-abd5-46305526d3a9-serving-cert\") pod \"controller-manager-bd8d95bb7-j4p8s\" (UID: \"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:31 crc kubenswrapper[4981]: I0128 15:08:31.004059 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b97d19a2-ea0e-42fd-9bc1-473e64827444-serving-cert\") pod \"route-controller-manager-9b46d45b9-b64kn\" (UID: \"b97d19a2-ea0e-42fd-9bc1-473e64827444\") " pod="openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn" Jan 28 15:08:31 crc kubenswrapper[4981]: I0128 15:08:31.016747 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-96x6w\" (UniqueName: \"kubernetes.io/projected/b97d19a2-ea0e-42fd-9bc1-473e64827444-kube-api-access-96x6w\") pod \"route-controller-manager-9b46d45b9-b64kn\" (UID: \"b97d19a2-ea0e-42fd-9bc1-473e64827444\") " pod="openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn" Jan 28 15:08:31 crc kubenswrapper[4981]: I0128 15:08:31.032544 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m27h\" (UniqueName: \"kubernetes.io/projected/dec7f91e-c9f4-4450-abd5-46305526d3a9-kube-api-access-6m27h\") pod \"controller-manager-bd8d95bb7-j4p8s\" (UID: \"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:31 crc kubenswrapper[4981]: I0128 15:08:31.064551 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn" Jan 28 15:08:31 crc kubenswrapper[4981]: I0128 15:08:31.076463 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:31 crc kubenswrapper[4981]: I0128 15:08:31.346871 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0008f36-c407-4f88-9da3-55a32f23bf4d" path="/var/lib/kubelet/pods/c0008f36-c407-4f88-9da3-55a32f23bf4d/volumes" Jan 28 15:08:31 crc kubenswrapper[4981]: I0128 15:08:31.348318 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70" path="/var/lib/kubelet/pods/f6200d3a-e3c8-40d0-bf2b-777f3ed0cb70/volumes" Jan 28 15:08:31 crc kubenswrapper[4981]: I0128 15:08:31.348776 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s"] Jan 28 15:08:31 crc kubenswrapper[4981]: I0128 15:08:31.513160 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn"] Jan 28 15:08:31 crc kubenswrapper[4981]: W0128 15:08:31.517784 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb97d19a2_ea0e_42fd_9bc1_473e64827444.slice/crio-997f13943affa3c5619c24a342731bb66a8956de5603257625f70768c20b12b2 WatchSource:0}: Error finding container 997f13943affa3c5619c24a342731bb66a8956de5603257625f70768c20b12b2: Status 404 returned error can't find the container with id 997f13943affa3c5619c24a342731bb66a8956de5603257625f70768c20b12b2 Jan 28 15:08:32 crc kubenswrapper[4981]: I0128 15:08:31.999513 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn" event={"ID":"b97d19a2-ea0e-42fd-9bc1-473e64827444","Type":"ContainerStarted","Data":"304903033ef119db78c7b389c3ebc3e12427ebc73039265dae136f70ee82a94f"} Jan 28 15:08:32 crc kubenswrapper[4981]: I0128 15:08:32.000082 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn" Jan 28 15:08:32 crc kubenswrapper[4981]: I0128 15:08:32.000100 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn" event={"ID":"b97d19a2-ea0e-42fd-9bc1-473e64827444","Type":"ContainerStarted","Data":"997f13943affa3c5619c24a342731bb66a8956de5603257625f70768c20b12b2"} Jan 28 15:08:32 crc kubenswrapper[4981]: 
I0128 15:08:32.000711 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" event={"ID":"dec7f91e-c9f4-4450-abd5-46305526d3a9","Type":"ContainerStarted","Data":"06773990ee71b4f8d7b0e539e7263c56b4d3c0304f1c6248b49b63ec86ccbd1d"} Jan 28 15:08:32 crc kubenswrapper[4981]: I0128 15:08:32.000735 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" event={"ID":"dec7f91e-c9f4-4450-abd5-46305526d3a9","Type":"ContainerStarted","Data":"a7be010f56b2486160578ff77345dc4ce2b14d90a01bc49b9eb86f841d392d31"} Jan 28 15:08:32 crc kubenswrapper[4981]: I0128 15:08:32.001298 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:32 crc kubenswrapper[4981]: I0128 15:08:32.008446 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:32 crc kubenswrapper[4981]: I0128 15:08:32.032171 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn" podStartSLOduration=4.032147037 podStartE2EDuration="4.032147037s" podCreationTimestamp="2026-01-28 15:08:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:08:32.031693902 +0000 UTC m=+323.483852143" watchObservedRunningTime="2026-01-28 15:08:32.032147037 +0000 UTC m=+323.484305278" Jan 28 15:08:32 crc kubenswrapper[4981]: I0128 15:08:32.056472 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" podStartSLOduration=3.056455556 podStartE2EDuration="3.056455556s" podCreationTimestamp="2026-01-28 15:08:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:08:32.053390635 +0000 UTC m=+323.505548876" watchObservedRunningTime="2026-01-28 15:08:32.056455556 +0000 UTC m=+323.508613797" Jan 28 15:08:32 crc kubenswrapper[4981]: I0128 15:08:32.059784 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9b46d45b9-b64kn" Jan 28 15:08:48 crc kubenswrapper[4981]: I0128 15:08:48.838932 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s"] Jan 28 15:08:48 crc kubenswrapper[4981]: I0128 15:08:48.839635 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" podUID="dec7f91e-c9f4-4450-abd5-46305526d3a9" containerName="controller-manager" containerID="cri-o://06773990ee71b4f8d7b0e539e7263c56b4d3c0304f1c6248b49b63ec86ccbd1d" gracePeriod=30 Jan 28 15:08:49 crc kubenswrapper[4981]: I0128 15:08:49.116076 4981 generic.go:334] "Generic (PLEG): container finished" podID="dec7f91e-c9f4-4450-abd5-46305526d3a9" containerID="06773990ee71b4f8d7b0e539e7263c56b4d3c0304f1c6248b49b63ec86ccbd1d" exitCode=0 Jan 28 15:08:49 crc kubenswrapper[4981]: I0128 15:08:49.116136 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" 
event={"ID":"dec7f91e-c9f4-4450-abd5-46305526d3a9","Type":"ContainerDied","Data":"06773990ee71b4f8d7b0e539e7263c56b4d3c0304f1c6248b49b63ec86ccbd1d"} Jan 28 15:08:49 crc kubenswrapper[4981]: I0128 15:08:49.448557 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:49 crc kubenswrapper[4981]: I0128 15:08:49.560372 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dec7f91e-c9f4-4450-abd5-46305526d3a9-client-ca\") pod \"dec7f91e-c9f4-4450-abd5-46305526d3a9\" (UID: \"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " Jan 28 15:08:49 crc kubenswrapper[4981]: I0128 15:08:49.561366 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dec7f91e-c9f4-4450-abd5-46305526d3a9-client-ca" (OuterVolumeSpecName: "client-ca") pod "dec7f91e-c9f4-4450-abd5-46305526d3a9" (UID: "dec7f91e-c9f4-4450-abd5-46305526d3a9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:08:49 crc kubenswrapper[4981]: I0128 15:08:49.561425 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dec7f91e-c9f4-4450-abd5-46305526d3a9-proxy-ca-bundles\") pod \"dec7f91e-c9f4-4450-abd5-46305526d3a9\" (UID: \"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " Jan 28 15:08:49 crc kubenswrapper[4981]: I0128 15:08:49.561651 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dec7f91e-c9f4-4450-abd5-46305526d3a9-serving-cert\") pod \"dec7f91e-c9f4-4450-abd5-46305526d3a9\" (UID: \"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " Jan 28 15:08:49 crc kubenswrapper[4981]: I0128 15:08:49.561857 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m27h\" (UniqueName: \"kubernetes.io/projected/dec7f91e-c9f4-4450-abd5-46305526d3a9-kube-api-access-6m27h\") pod \"dec7f91e-c9f4-4450-abd5-46305526d3a9\" (UID: \"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " Jan 28 15:08:49 crc kubenswrapper[4981]: I0128 15:08:49.562055 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dec7f91e-c9f4-4450-abd5-46305526d3a9-config\") pod \"dec7f91e-c9f4-4450-abd5-46305526d3a9\" (UID: \"dec7f91e-c9f4-4450-abd5-46305526d3a9\") " Jan 28 15:08:49 crc kubenswrapper[4981]: I0128 15:08:49.562385 4981 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dec7f91e-c9f4-4450-abd5-46305526d3a9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:49 crc kubenswrapper[4981]: I0128 15:08:49.562106 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dec7f91e-c9f4-4450-abd5-46305526d3a9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "dec7f91e-c9f4-4450-abd5-46305526d3a9" (UID: "dec7f91e-c9f4-4450-abd5-46305526d3a9"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:08:49 crc kubenswrapper[4981]: I0128 15:08:49.563062 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dec7f91e-c9f4-4450-abd5-46305526d3a9-config" (OuterVolumeSpecName: "config") pod "dec7f91e-c9f4-4450-abd5-46305526d3a9" (UID: "dec7f91e-c9f4-4450-abd5-46305526d3a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:08:49 crc kubenswrapper[4981]: I0128 15:08:49.569776 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dec7f91e-c9f4-4450-abd5-46305526d3a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dec7f91e-c9f4-4450-abd5-46305526d3a9" (UID: "dec7f91e-c9f4-4450-abd5-46305526d3a9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:08:49 crc kubenswrapper[4981]: I0128 15:08:49.570022 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dec7f91e-c9f4-4450-abd5-46305526d3a9-kube-api-access-6m27h" (OuterVolumeSpecName: "kube-api-access-6m27h") pod "dec7f91e-c9f4-4450-abd5-46305526d3a9" (UID: "dec7f91e-c9f4-4450-abd5-46305526d3a9"). InnerVolumeSpecName "kube-api-access-6m27h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:08:49 crc kubenswrapper[4981]: I0128 15:08:49.663615 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dec7f91e-c9f4-4450-abd5-46305526d3a9-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:49 crc kubenswrapper[4981]: I0128 15:08:49.663668 4981 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dec7f91e-c9f4-4450-abd5-46305526d3a9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:49 crc kubenswrapper[4981]: I0128 15:08:49.663696 4981 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dec7f91e-c9f4-4450-abd5-46305526d3a9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:49 crc kubenswrapper[4981]: I0128 15:08:49.663721 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m27h\" (UniqueName: \"kubernetes.io/projected/dec7f91e-c9f4-4450-abd5-46305526d3a9-kube-api-access-6m27h\") on node \"crc\" DevicePath \"\"" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.125948 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" event={"ID":"dec7f91e-c9f4-4450-abd5-46305526d3a9","Type":"ContainerDied","Data":"a7be010f56b2486160578ff77345dc4ce2b14d90a01bc49b9eb86f841d392d31"} Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.126012 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.126040 4981 scope.go:117] "RemoveContainer" containerID="06773990ee71b4f8d7b0e539e7263c56b4d3c0304f1c6248b49b63ec86ccbd1d" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.182537 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s"] Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.188904 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bd8d95bb7-j4p8s"] Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.751943 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-796dc64f67-zwncz"] Jan 28 15:08:50 crc kubenswrapper[4981]: E0128 15:08:50.752640 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dec7f91e-c9f4-4450-abd5-46305526d3a9" containerName="controller-manager" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.752663 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="dec7f91e-c9f4-4450-abd5-46305526d3a9" containerName="controller-manager" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.752865 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="dec7f91e-c9f4-4450-abd5-46305526d3a9" containerName="controller-manager" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.753526 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.758316 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.758926 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.760289 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.760615 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.763052 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.772465 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.777549 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-796dc64f67-zwncz"] Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.779688 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.880235 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b47e4cdb-e600-4e3c-95ef-6204aebc5c07-client-ca\") pod \"controller-manager-796dc64f67-zwncz\" (UID: \"b47e4cdb-e600-4e3c-95ef-6204aebc5c07\") " 
pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.880356 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b47e4cdb-e600-4e3c-95ef-6204aebc5c07-serving-cert\") pod \"controller-manager-796dc64f67-zwncz\" (UID: \"b47e4cdb-e600-4e3c-95ef-6204aebc5c07\") " pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.880398 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b47e4cdb-e600-4e3c-95ef-6204aebc5c07-config\") pod \"controller-manager-796dc64f67-zwncz\" (UID: \"b47e4cdb-e600-4e3c-95ef-6204aebc5c07\") " pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.880426 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxz4q\" (UniqueName: \"kubernetes.io/projected/b47e4cdb-e600-4e3c-95ef-6204aebc5c07-kube-api-access-sxz4q\") pod \"controller-manager-796dc64f67-zwncz\" (UID: \"b47e4cdb-e600-4e3c-95ef-6204aebc5c07\") " pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.880750 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b47e4cdb-e600-4e3c-95ef-6204aebc5c07-proxy-ca-bundles\") pod \"controller-manager-796dc64f67-zwncz\" (UID: \"b47e4cdb-e600-4e3c-95ef-6204aebc5c07\") " pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.982741 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b47e4cdb-e600-4e3c-95ef-6204aebc5c07-proxy-ca-bundles\") pod \"controller-manager-796dc64f67-zwncz\" (UID: \"b47e4cdb-e600-4e3c-95ef-6204aebc5c07\") " pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.982840 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b47e4cdb-e600-4e3c-95ef-6204aebc5c07-client-ca\") pod \"controller-manager-796dc64f67-zwncz\" (UID: \"b47e4cdb-e600-4e3c-95ef-6204aebc5c07\") " pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.982944 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b47e4cdb-e600-4e3c-95ef-6204aebc5c07-serving-cert\") pod \"controller-manager-796dc64f67-zwncz\" (UID: \"b47e4cdb-e600-4e3c-95ef-6204aebc5c07\") " pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.983021 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b47e4cdb-e600-4e3c-95ef-6204aebc5c07-config\") pod \"controller-manager-796dc64f67-zwncz\" (UID: \"b47e4cdb-e600-4e3c-95ef-6204aebc5c07\") " pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.983073 
4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxz4q\" (UniqueName: \"kubernetes.io/projected/b47e4cdb-e600-4e3c-95ef-6204aebc5c07-kube-api-access-sxz4q\") pod \"controller-manager-796dc64f67-zwncz\" (UID: \"b47e4cdb-e600-4e3c-95ef-6204aebc5c07\") " pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.985369 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b47e4cdb-e600-4e3c-95ef-6204aebc5c07-proxy-ca-bundles\") pod \"controller-manager-796dc64f67-zwncz\" (UID: \"b47e4cdb-e600-4e3c-95ef-6204aebc5c07\") " pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.985491 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b47e4cdb-e600-4e3c-95ef-6204aebc5c07-client-ca\") pod \"controller-manager-796dc64f67-zwncz\" (UID: \"b47e4cdb-e600-4e3c-95ef-6204aebc5c07\") " pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.986588 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b47e4cdb-e600-4e3c-95ef-6204aebc5c07-config\") pod \"controller-manager-796dc64f67-zwncz\" (UID: \"b47e4cdb-e600-4e3c-95ef-6204aebc5c07\") " pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" Jan 28 15:08:50 crc kubenswrapper[4981]: I0128 15:08:50.996466 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b47e4cdb-e600-4e3c-95ef-6204aebc5c07-serving-cert\") pod \"controller-manager-796dc64f67-zwncz\" (UID: \"b47e4cdb-e600-4e3c-95ef-6204aebc5c07\") " pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" Jan 28 15:08:51 crc kubenswrapper[4981]: I0128 15:08:51.007612 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxz4q\" (UniqueName: \"kubernetes.io/projected/b47e4cdb-e600-4e3c-95ef-6204aebc5c07-kube-api-access-sxz4q\") pod \"controller-manager-796dc64f67-zwncz\" (UID: \"b47e4cdb-e600-4e3c-95ef-6204aebc5c07\") " pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" Jan 28 15:08:51 crc kubenswrapper[4981]: I0128 15:08:51.084416 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" Jan 28 15:08:51 crc kubenswrapper[4981]: I0128 15:08:51.326265 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dec7f91e-c9f4-4450-abd5-46305526d3a9" path="/var/lib/kubelet/pods/dec7f91e-c9f4-4450-abd5-46305526d3a9/volumes" Jan 28 15:08:51 crc kubenswrapper[4981]: I0128 15:08:51.549269 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-796dc64f67-zwncz"] Jan 28 15:08:52 crc kubenswrapper[4981]: I0128 15:08:52.149511 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" event={"ID":"b47e4cdb-e600-4e3c-95ef-6204aebc5c07","Type":"ContainerStarted","Data":"a40af047b54b9e1b262e54f883bc0c5f378cb4766e303d6da38d5f5458809ff0"} Jan 28 15:08:52 crc kubenswrapper[4981]: I0128 15:08:52.149855 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" event={"ID":"b47e4cdb-e600-4e3c-95ef-6204aebc5c07","Type":"ContainerStarted","Data":"fc212bd6d4e0413f6693e2f9a48cc2fce00eb962e8e7cbb744ec89436f12af7a"} Jan 28 15:08:52 crc kubenswrapper[4981]: I0128 15:08:52.149884 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" Jan 28 15:08:52 crc kubenswrapper[4981]: I0128 15:08:52.155064 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" Jan 28 15:08:52 crc kubenswrapper[4981]: I0128 15:08:52.193142 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-796dc64f67-zwncz" podStartSLOduration=4.193121169 podStartE2EDuration="4.193121169s" podCreationTimestamp="2026-01-28 15:08:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:08:52.169146792 +0000 UTC m=+343.621305033" watchObservedRunningTime="2026-01-28 15:08:52.193121169 +0000 UTC m=+343.645279430" Jan 28 15:09:19 crc kubenswrapper[4981]: I0128 15:09:19.897848 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:09:19 crc kubenswrapper[4981]: I0128 15:09:19.898570 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:09:38 crc kubenswrapper[4981]: I0128 15:09:38.945042 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ltxqv"] Jan 28 15:09:38 crc kubenswrapper[4981]: I0128 15:09:38.946818 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.005356 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ltxqv"] Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.101989 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.102168 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.102258 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.102300 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxxx9\" (UniqueName: \"kubernetes.io/projected/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-kube-api-access-fxxx9\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.102337 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-registry-tls\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.102483 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-bound-sa-token\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.102554 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-registry-certificates\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.102609 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-trusted-ca\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.126656 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.204414 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.204535 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxxx9\" (UniqueName: \"kubernetes.io/projected/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-kube-api-access-fxxx9\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.204574 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-registry-tls\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.204621 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-bound-sa-token\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.204656 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-registry-certificates\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.204692 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-trusted-ca\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.204756 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.208120 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-trusted-ca\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.208849 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-ca-trust-extracted\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.209004 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-registry-certificates\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.213527 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-installation-pull-secrets\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.217351 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-registry-tls\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.233909 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxxx9\" (UniqueName: \"kubernetes.io/projected/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-kube-api-access-fxxx9\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.235479 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091-bound-sa-token\") pod \"image-registry-66df7c8f76-ltxqv\" (UID: \"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091\") " pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.319442 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:39 crc kubenswrapper[4981]: I0128 15:09:39.822487 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-ltxqv"] Jan 28 15:09:40 crc kubenswrapper[4981]: I0128 15:09:40.470837 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" event={"ID":"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091","Type":"ContainerStarted","Data":"be34b4c44834042a07633e580ef1ce16523451b36f8ba74f8780774b51193cdd"} Jan 28 15:09:40 crc kubenswrapper[4981]: I0128 15:09:40.471308 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" event={"ID":"5f7e43c9-4fa4-4669-b7af-a2cfe8b1c091","Type":"ContainerStarted","Data":"7a7337c21c795b163c701fe54afd7be8265686d3c2376d6d15e002dc736dd524"} Jan 28 15:09:40 crc kubenswrapper[4981]: I0128 15:09:40.471346 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:40 crc kubenswrapper[4981]: I0128 15:09:40.497123 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" podStartSLOduration=2.497097569 podStartE2EDuration="2.497097569s" podCreationTimestamp="2026-01-28 15:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:09:40.494221759 +0000 UTC m=+391.946380080" watchObservedRunningTime="2026-01-28 15:09:40.497097569 +0000 UTC m=+391.949255850" Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.430124 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rnbwh"] Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.431227 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rnbwh" podUID="10fb71f7-ebd9-4ce7-91e8-cabe64948d68" containerName="registry-server" containerID="cri-o://214cb5050b07c68d3f285bda2e97e86b258f47d6a99ec8a4cf495965030c95b1" gracePeriod=30 Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.435111 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9gspg"] Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.435418 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9gspg" podUID="06d03aa9-a3ff-46c7-bafd-4666c5adf6c1" containerName="registry-server" containerID="cri-o://35a94a3b5c466b0a495713441cfa24da7be3a0af42454266ffcc92f01c1b11d8" gracePeriod=30 Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.453139 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nskbs"] Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.453393 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" podUID="a2889542-ffb9-4af8-8f77-ccfd601dec88" containerName="marketplace-operator" containerID="cri-o://8fd2dad8b1bbd6c41ea3886470f4050c51bc2f180988277d4765f595e236556d" gracePeriod=30 Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.466119 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v42ln"] Jan 
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.466427 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v42ln" podUID="20cbd927-94b5-452e-a139-8d797dd4f4f7" containerName="registry-server" containerID="cri-o://7b7250fca34e373bbeec0f2d174bd84c9c32b8c4202cdecce0e4e641c0f2d188" gracePeriod=30
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.472594 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-49xjs"]
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.477421 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-49xjs"
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.492322 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dqqn8"]
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.492590 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dqqn8" podUID="b1101605-52b4-4c83-9958-11c0fe93d5e3" containerName="registry-server" containerID="cri-o://4fd51047e389a4ad226b82e520ca1486e074cc00078e47301625638874d5d73b" gracePeriod=30
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.497827 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-49xjs"]
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.518128 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxk8x\" (UniqueName: \"kubernetes.io/projected/8b6270bb-35bf-4292-b065-b6572531a590-kube-api-access-mxk8x\") pod \"marketplace-operator-79b997595-49xjs\" (UID: \"8b6270bb-35bf-4292-b065-b6572531a590\") " pod="openshift-marketplace/marketplace-operator-79b997595-49xjs"
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.518325 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8b6270bb-35bf-4292-b065-b6572531a590-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-49xjs\" (UID: \"8b6270bb-35bf-4292-b065-b6572531a590\") " pod="openshift-marketplace/marketplace-operator-79b997595-49xjs"
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.518380 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b6270bb-35bf-4292-b065-b6572531a590-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-49xjs\" (UID: \"8b6270bb-35bf-4292-b065-b6572531a590\") " pod="openshift-marketplace/marketplace-operator-79b997595-49xjs"
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.620085 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b6270bb-35bf-4292-b065-b6572531a590-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-49xjs\" (UID: \"8b6270bb-35bf-4292-b065-b6572531a590\") " pod="openshift-marketplace/marketplace-operator-79b997595-49xjs"
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.620156 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxk8x\" (UniqueName: \"kubernetes.io/projected/8b6270bb-35bf-4292-b065-b6572531a590-kube-api-access-mxk8x\") pod \"marketplace-operator-79b997595-49xjs\" (UID: \"8b6270bb-35bf-4292-b065-b6572531a590\") " pod="openshift-marketplace/marketplace-operator-79b997595-49xjs"
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.620216 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8b6270bb-35bf-4292-b065-b6572531a590-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-49xjs\" (UID: \"8b6270bb-35bf-4292-b065-b6572531a590\") " pod="openshift-marketplace/marketplace-operator-79b997595-49xjs"
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.621588 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8b6270bb-35bf-4292-b065-b6572531a590-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-49xjs\" (UID: \"8b6270bb-35bf-4292-b065-b6572531a590\") " pod="openshift-marketplace/marketplace-operator-79b997595-49xjs"
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.630958 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8b6270bb-35bf-4292-b065-b6572531a590-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-49xjs\" (UID: \"8b6270bb-35bf-4292-b065-b6572531a590\") " pod="openshift-marketplace/marketplace-operator-79b997595-49xjs"
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.639756 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxk8x\" (UniqueName: \"kubernetes.io/projected/8b6270bb-35bf-4292-b065-b6572531a590-kube-api-access-mxk8x\") pod \"marketplace-operator-79b997595-49xjs\" (UID: \"8b6270bb-35bf-4292-b065-b6572531a590\") " pod="openshift-marketplace/marketplace-operator-79b997595-49xjs"
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.803345 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-49xjs"
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.873099 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rnbwh"
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.898683 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.898763 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.927460 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fb71f7-ebd9-4ce7-91e8-cabe64948d68-utilities\") pod \"10fb71f7-ebd9-4ce7-91e8-cabe64948d68\" (UID: \"10fb71f7-ebd9-4ce7-91e8-cabe64948d68\") "
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.927567 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv2kc\" (UniqueName: \"kubernetes.io/projected/10fb71f7-ebd9-4ce7-91e8-cabe64948d68-kube-api-access-bv2kc\") pod \"10fb71f7-ebd9-4ce7-91e8-cabe64948d68\" (UID: \"10fb71f7-ebd9-4ce7-91e8-cabe64948d68\") "
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.927586 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fb71f7-ebd9-4ce7-91e8-cabe64948d68-catalog-content\") pod \"10fb71f7-ebd9-4ce7-91e8-cabe64948d68\" (UID: \"10fb71f7-ebd9-4ce7-91e8-cabe64948d68\") "
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.928806 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10fb71f7-ebd9-4ce7-91e8-cabe64948d68-utilities" (OuterVolumeSpecName: "utilities") pod "10fb71f7-ebd9-4ce7-91e8-cabe64948d68" (UID: "10fb71f7-ebd9-4ce7-91e8-cabe64948d68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.932963 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10fb71f7-ebd9-4ce7-91e8-cabe64948d68-kube-api-access-bv2kc" (OuterVolumeSpecName: "kube-api-access-bv2kc") pod "10fb71f7-ebd9-4ce7-91e8-cabe64948d68" (UID: "10fb71f7-ebd9-4ce7-91e8-cabe64948d68"). InnerVolumeSpecName "kube-api-access-bv2kc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.961448 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v42ln"
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.989653 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10fb71f7-ebd9-4ce7-91e8-cabe64948d68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10fb71f7-ebd9-4ce7-91e8-cabe64948d68" (UID: "10fb71f7-ebd9-4ce7-91e8-cabe64948d68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:09:49 crc kubenswrapper[4981]: I0128 15:09:49.999241 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9gspg"
Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.000996 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nskbs"
Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.029705 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cbd927-94b5-452e-a139-8d797dd4f4f7-catalog-content\") pod \"20cbd927-94b5-452e-a139-8d797dd4f4f7\" (UID: \"20cbd927-94b5-452e-a139-8d797dd4f4f7\") "
Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.029825 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cbd927-94b5-452e-a139-8d797dd4f4f7-utilities\") pod \"20cbd927-94b5-452e-a139-8d797dd4f4f7\" (UID: \"20cbd927-94b5-452e-a139-8d797dd4f4f7\") "
Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.029945 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llwzs\" (UniqueName: \"kubernetes.io/projected/20cbd927-94b5-452e-a139-8d797dd4f4f7-kube-api-access-llwzs\") pod \"20cbd927-94b5-452e-a139-8d797dd4f4f7\" (UID: \"20cbd927-94b5-452e-a139-8d797dd4f4f7\") "
Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.031706 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20cbd927-94b5-452e-a139-8d797dd4f4f7-utilities" (OuterVolumeSpecName: "utilities") pod "20cbd927-94b5-452e-a139-8d797dd4f4f7" (UID: "20cbd927-94b5-452e-a139-8d797dd4f4f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.032545 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/20cbd927-94b5-452e-a139-8d797dd4f4f7-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.032566 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10fb71f7-ebd9-4ce7-91e8-cabe64948d68-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.032576 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv2kc\" (UniqueName: \"kubernetes.io/projected/10fb71f7-ebd9-4ce7-91e8-cabe64948d68-kube-api-access-bv2kc\") on node \"crc\" DevicePath \"\""
Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.032585 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10fb71f7-ebd9-4ce7-91e8-cabe64948d68-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.034127 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20cbd927-94b5-452e-a139-8d797dd4f4f7-kube-api-access-llwzs" (OuterVolumeSpecName: "kube-api-access-llwzs") pod "20cbd927-94b5-452e-a139-8d797dd4f4f7" (UID: "20cbd927-94b5-452e-a139-8d797dd4f4f7"). InnerVolumeSpecName "kube-api-access-llwzs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.052484 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqqn8"
Need to start a new one" pod="openshift-marketplace/redhat-operators-dqqn8" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.069098 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20cbd927-94b5-452e-a139-8d797dd4f4f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "20cbd927-94b5-452e-a139-8d797dd4f4f7" (UID: "20cbd927-94b5-452e-a139-8d797dd4f4f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.133532 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1101605-52b4-4c83-9958-11c0fe93d5e3-utilities\") pod \"b1101605-52b4-4c83-9958-11c0fe93d5e3\" (UID: \"b1101605-52b4-4c83-9958-11c0fe93d5e3\") " Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.133840 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d03aa9-a3ff-46c7-bafd-4666c5adf6c1-catalog-content\") pod \"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1\" (UID: \"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1\") " Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.133866 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46mxf\" (UniqueName: \"kubernetes.io/projected/06d03aa9-a3ff-46c7-bafd-4666c5adf6c1-kube-api-access-46mxf\") pod \"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1\" (UID: \"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1\") " Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.133895 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d03aa9-a3ff-46c7-bafd-4666c5adf6c1-utilities\") pod \"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1\" (UID: \"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1\") " Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.133910 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1101605-52b4-4c83-9958-11c0fe93d5e3-catalog-content\") pod \"b1101605-52b4-4c83-9958-11c0fe93d5e3\" (UID: \"b1101605-52b4-4c83-9958-11c0fe93d5e3\") " Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.133975 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82dk9\" (UniqueName: \"kubernetes.io/projected/b1101605-52b4-4c83-9958-11c0fe93d5e3-kube-api-access-82dk9\") pod \"b1101605-52b4-4c83-9958-11c0fe93d5e3\" (UID: \"b1101605-52b4-4c83-9958-11c0fe93d5e3\") " Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.133996 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2889542-ffb9-4af8-8f77-ccfd601dec88-marketplace-trusted-ca\") pod \"a2889542-ffb9-4af8-8f77-ccfd601dec88\" (UID: \"a2889542-ffb9-4af8-8f77-ccfd601dec88\") " Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.134035 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m4fz\" (UniqueName: \"kubernetes.io/projected/a2889542-ffb9-4af8-8f77-ccfd601dec88-kube-api-access-5m4fz\") pod \"a2889542-ffb9-4af8-8f77-ccfd601dec88\" (UID: \"a2889542-ffb9-4af8-8f77-ccfd601dec88\") " Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.134070 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2889542-ffb9-4af8-8f77-ccfd601dec88-marketplace-operator-metrics\") pod \"a2889542-ffb9-4af8-8f77-ccfd601dec88\" (UID: \"a2889542-ffb9-4af8-8f77-ccfd601dec88\") " Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.134294 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llwzs\" (UniqueName: \"kubernetes.io/projected/20cbd927-94b5-452e-a139-8d797dd4f4f7-kube-api-access-llwzs\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.134310 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/20cbd927-94b5-452e-a139-8d797dd4f4f7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.134683 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06d03aa9-a3ff-46c7-bafd-4666c5adf6c1-utilities" (OuterVolumeSpecName: "utilities") pod "06d03aa9-a3ff-46c7-bafd-4666c5adf6c1" (UID: "06d03aa9-a3ff-46c7-bafd-4666c5adf6c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.135051 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2889542-ffb9-4af8-8f77-ccfd601dec88-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "a2889542-ffb9-4af8-8f77-ccfd601dec88" (UID: "a2889542-ffb9-4af8-8f77-ccfd601dec88"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.135434 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1101605-52b4-4c83-9958-11c0fe93d5e3-utilities" (OuterVolumeSpecName: "utilities") pod "b1101605-52b4-4c83-9958-11c0fe93d5e3" (UID: "b1101605-52b4-4c83-9958-11c0fe93d5e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.136722 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06d03aa9-a3ff-46c7-bafd-4666c5adf6c1-kube-api-access-46mxf" (OuterVolumeSpecName: "kube-api-access-46mxf") pod "06d03aa9-a3ff-46c7-bafd-4666c5adf6c1" (UID: "06d03aa9-a3ff-46c7-bafd-4666c5adf6c1"). InnerVolumeSpecName "kube-api-access-46mxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.137385 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2889542-ffb9-4af8-8f77-ccfd601dec88-kube-api-access-5m4fz" (OuterVolumeSpecName: "kube-api-access-5m4fz") pod "a2889542-ffb9-4af8-8f77-ccfd601dec88" (UID: "a2889542-ffb9-4af8-8f77-ccfd601dec88"). InnerVolumeSpecName "kube-api-access-5m4fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.137630 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1101605-52b4-4c83-9958-11c0fe93d5e3-kube-api-access-82dk9" (OuterVolumeSpecName: "kube-api-access-82dk9") pod "b1101605-52b4-4c83-9958-11c0fe93d5e3" (UID: "b1101605-52b4-4c83-9958-11c0fe93d5e3"). InnerVolumeSpecName "kube-api-access-82dk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.137748 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2889542-ffb9-4af8-8f77-ccfd601dec88-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "a2889542-ffb9-4af8-8f77-ccfd601dec88" (UID: "a2889542-ffb9-4af8-8f77-ccfd601dec88"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.182725 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06d03aa9-a3ff-46c7-bafd-4666c5adf6c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06d03aa9-a3ff-46c7-bafd-4666c5adf6c1" (UID: "06d03aa9-a3ff-46c7-bafd-4666c5adf6c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.235228 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m4fz\" (UniqueName: \"kubernetes.io/projected/a2889542-ffb9-4af8-8f77-ccfd601dec88-kube-api-access-5m4fz\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.235253 4981 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a2889542-ffb9-4af8-8f77-ccfd601dec88-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.235264 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1101605-52b4-4c83-9958-11c0fe93d5e3-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.235273 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06d03aa9-a3ff-46c7-bafd-4666c5adf6c1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.235283 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46mxf\" (UniqueName: \"kubernetes.io/projected/06d03aa9-a3ff-46c7-bafd-4666c5adf6c1-kube-api-access-46mxf\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.235292 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06d03aa9-a3ff-46c7-bafd-4666c5adf6c1-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.235301 4981 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a2889542-ffb9-4af8-8f77-ccfd601dec88-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.235313 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82dk9\" (UniqueName: \"kubernetes.io/projected/b1101605-52b4-4c83-9958-11c0fe93d5e3-kube-api-access-82dk9\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.254425 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1101605-52b4-4c83-9958-11c0fe93d5e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1101605-52b4-4c83-9958-11c0fe93d5e3" (UID: "b1101605-52b4-4c83-9958-11c0fe93d5e3"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.259245 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-49xjs"] Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.336149 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1101605-52b4-4c83-9958-11c0fe93d5e3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.537741 4981 generic.go:334] "Generic (PLEG): container finished" podID="a2889542-ffb9-4af8-8f77-ccfd601dec88" containerID="8fd2dad8b1bbd6c41ea3886470f4050c51bc2f180988277d4765f595e236556d" exitCode=0 Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.537824 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" event={"ID":"a2889542-ffb9-4af8-8f77-ccfd601dec88","Type":"ContainerDied","Data":"8fd2dad8b1bbd6c41ea3886470f4050c51bc2f180988277d4765f595e236556d"} Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.537853 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" event={"ID":"a2889542-ffb9-4af8-8f77-ccfd601dec88","Type":"ContainerDied","Data":"83d1563b931d07ae0ad36956d659836797a3f12e687f90c502db233620991e02"} Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.537869 4981 scope.go:117] "RemoveContainer" containerID="8fd2dad8b1bbd6c41ea3886470f4050c51bc2f180988277d4765f595e236556d" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.537826 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nskbs" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.541634 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-49xjs" event={"ID":"8b6270bb-35bf-4292-b065-b6572531a590","Type":"ContainerStarted","Data":"75b5deb7b80542948b32967f6b2aadfa769eb64cbfdac2f5a148d2c617f7bd25"} Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.541686 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-49xjs" event={"ID":"8b6270bb-35bf-4292-b065-b6572531a590","Type":"ContainerStarted","Data":"c602973421304879ae12df71620f07b104627ba11ae23656cc7bde36468b2e45"} Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.541830 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-49xjs" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.543109 4981 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-49xjs container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" start-of-body= Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.543148 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-49xjs" podUID="8b6270bb-35bf-4292-b065-b6572531a590" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.551142 4981 
generic.go:334] "Generic (PLEG): container finished" podID="20cbd927-94b5-452e-a139-8d797dd4f4f7" containerID="7b7250fca34e373bbeec0f2d174bd84c9c32b8c4202cdecce0e4e641c0f2d188" exitCode=0 Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.551228 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v42ln" event={"ID":"20cbd927-94b5-452e-a139-8d797dd4f4f7","Type":"ContainerDied","Data":"7b7250fca34e373bbeec0f2d174bd84c9c32b8c4202cdecce0e4e641c0f2d188"} Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.551286 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v42ln" event={"ID":"20cbd927-94b5-452e-a139-8d797dd4f4f7","Type":"ContainerDied","Data":"3450df77b6cd1841c0c3292299d83aae1c011fdf9775f84d9691013bef6b0432"} Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.551805 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v42ln" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.554920 4981 generic.go:334] "Generic (PLEG): container finished" podID="b1101605-52b4-4c83-9958-11c0fe93d5e3" containerID="4fd51047e389a4ad226b82e520ca1486e074cc00078e47301625638874d5d73b" exitCode=0 Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.554984 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqqn8" event={"ID":"b1101605-52b4-4c83-9958-11c0fe93d5e3","Type":"ContainerDied","Data":"4fd51047e389a4ad226b82e520ca1486e074cc00078e47301625638874d5d73b"} Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.555008 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dqqn8" event={"ID":"b1101605-52b4-4c83-9958-11c0fe93d5e3","Type":"ContainerDied","Data":"ccb0c44870cdd42b6f3807dc5873bc0a99cd0486bfa0bc84ecf29c6b09da06cc"} Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.555083 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dqqn8" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.558702 4981 generic.go:334] "Generic (PLEG): container finished" podID="06d03aa9-a3ff-46c7-bafd-4666c5adf6c1" containerID="35a94a3b5c466b0a495713441cfa24da7be3a0af42454266ffcc92f01c1b11d8" exitCode=0 Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.558753 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gspg" event={"ID":"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1","Type":"ContainerDied","Data":"35a94a3b5c466b0a495713441cfa24da7be3a0af42454266ffcc92f01c1b11d8"} Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.558804 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9gspg" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.558815 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gspg" event={"ID":"06d03aa9-a3ff-46c7-bafd-4666c5adf6c1","Type":"ContainerDied","Data":"d59b8c67a5e693e25ff6e14c66415d109a8ab5236d8401f4d0f694b9aa9eea96"} Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.562390 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-49xjs" podStartSLOduration=1.56237523 podStartE2EDuration="1.56237523s" podCreationTimestamp="2026-01-28 15:09:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:09:50.561171886 +0000 UTC m=+402.013330137" watchObservedRunningTime="2026-01-28 15:09:50.56237523 +0000 UTC m=+402.014533481" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.567949 4981 scope.go:117] "RemoveContainer" containerID="7833d7e5892341d81930b3592d51102624eb31f053a148cbdc058abc4de2cb5e" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.570655 4981 generic.go:334] "Generic (PLEG): container finished" podID="10fb71f7-ebd9-4ce7-91e8-cabe64948d68" containerID="214cb5050b07c68d3f285bda2e97e86b258f47d6a99ec8a4cf495965030c95b1" exitCode=0 Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.570696 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnbwh" event={"ID":"10fb71f7-ebd9-4ce7-91e8-cabe64948d68","Type":"ContainerDied","Data":"214cb5050b07c68d3f285bda2e97e86b258f47d6a99ec8a4cf495965030c95b1"} Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.570722 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rnbwh" event={"ID":"10fb71f7-ebd9-4ce7-91e8-cabe64948d68","Type":"ContainerDied","Data":"a786052bab671b712f3277dabdd4ff1923ad6a394320da5c912cb88045d5bbe3"} Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.570722 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rnbwh" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.610315 4981 scope.go:117] "RemoveContainer" containerID="8fd2dad8b1bbd6c41ea3886470f4050c51bc2f180988277d4765f595e236556d" Jan 28 15:09:50 crc kubenswrapper[4981]: E0128 15:09:50.610721 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fd2dad8b1bbd6c41ea3886470f4050c51bc2f180988277d4765f595e236556d\": container with ID starting with 8fd2dad8b1bbd6c41ea3886470f4050c51bc2f180988277d4765f595e236556d not found: ID does not exist" containerID="8fd2dad8b1bbd6c41ea3886470f4050c51bc2f180988277d4765f595e236556d" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.610782 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fd2dad8b1bbd6c41ea3886470f4050c51bc2f180988277d4765f595e236556d"} err="failed to get container status \"8fd2dad8b1bbd6c41ea3886470f4050c51bc2f180988277d4765f595e236556d\": rpc error: code = NotFound desc = could not find container \"8fd2dad8b1bbd6c41ea3886470f4050c51bc2f180988277d4765f595e236556d\": container with ID starting with 8fd2dad8b1bbd6c41ea3886470f4050c51bc2f180988277d4765f595e236556d not found: ID does not exist" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.610817 4981 scope.go:117] "RemoveContainer" containerID="7833d7e5892341d81930b3592d51102624eb31f053a148cbdc058abc4de2cb5e" Jan 28 15:09:50 crc kubenswrapper[4981]: E0128 15:09:50.611153 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7833d7e5892341d81930b3592d51102624eb31f053a148cbdc058abc4de2cb5e\": container with ID starting with 7833d7e5892341d81930b3592d51102624eb31f053a148cbdc058abc4de2cb5e not found: ID does not exist" containerID="7833d7e5892341d81930b3592d51102624eb31f053a148cbdc058abc4de2cb5e" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.611221 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7833d7e5892341d81930b3592d51102624eb31f053a148cbdc058abc4de2cb5e"} err="failed to get container status \"7833d7e5892341d81930b3592d51102624eb31f053a148cbdc058abc4de2cb5e\": rpc error: code = NotFound desc = could not find container \"7833d7e5892341d81930b3592d51102624eb31f053a148cbdc058abc4de2cb5e\": container with ID starting with 7833d7e5892341d81930b3592d51102624eb31f053a148cbdc058abc4de2cb5e not found: ID does not exist" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.611256 4981 scope.go:117] "RemoveContainer" containerID="7b7250fca34e373bbeec0f2d174bd84c9c32b8c4202cdecce0e4e641c0f2d188" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.630715 4981 scope.go:117] "RemoveContainer" containerID="fac89806857ac938a954c7e5c1269b31b93472b2515b59501914cf920cd995c3" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.645270 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nskbs"] Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.654296 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nskbs"] Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.657051 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v42ln"] Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.663135 4981 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-v42ln"] Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.667151 4981 scope.go:117] "RemoveContainer" containerID="f60d550dfd389c6efc973d77297f6e1cf1ce9a1bcffd417d140edce3d9b363ac" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.667936 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9gspg"] Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.676759 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9gspg"] Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.682841 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rnbwh"] Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.690910 4981 scope.go:117] "RemoveContainer" containerID="7b7250fca34e373bbeec0f2d174bd84c9c32b8c4202cdecce0e4e641c0f2d188" Jan 28 15:09:50 crc kubenswrapper[4981]: E0128 15:09:50.691488 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b7250fca34e373bbeec0f2d174bd84c9c32b8c4202cdecce0e4e641c0f2d188\": container with ID starting with 7b7250fca34e373bbeec0f2d174bd84c9c32b8c4202cdecce0e4e641c0f2d188 not found: ID does not exist" containerID="7b7250fca34e373bbeec0f2d174bd84c9c32b8c4202cdecce0e4e641c0f2d188" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.691592 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b7250fca34e373bbeec0f2d174bd84c9c32b8c4202cdecce0e4e641c0f2d188"} err="failed to get container status \"7b7250fca34e373bbeec0f2d174bd84c9c32b8c4202cdecce0e4e641c0f2d188\": rpc error: code = NotFound desc = could not find container \"7b7250fca34e373bbeec0f2d174bd84c9c32b8c4202cdecce0e4e641c0f2d188\": container with ID starting with 7b7250fca34e373bbeec0f2d174bd84c9c32b8c4202cdecce0e4e641c0f2d188 not found: ID does not exist" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.691704 4981 scope.go:117] "RemoveContainer" containerID="fac89806857ac938a954c7e5c1269b31b93472b2515b59501914cf920cd995c3" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.692046 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rnbwh"] Jan 28 15:09:50 crc kubenswrapper[4981]: E0128 15:09:50.692113 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fac89806857ac938a954c7e5c1269b31b93472b2515b59501914cf920cd995c3\": container with ID starting with fac89806857ac938a954c7e5c1269b31b93472b2515b59501914cf920cd995c3 not found: ID does not exist" containerID="fac89806857ac938a954c7e5c1269b31b93472b2515b59501914cf920cd995c3" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.692146 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fac89806857ac938a954c7e5c1269b31b93472b2515b59501914cf920cd995c3"} err="failed to get container status \"fac89806857ac938a954c7e5c1269b31b93472b2515b59501914cf920cd995c3\": rpc error: code = NotFound desc = could not find container \"fac89806857ac938a954c7e5c1269b31b93472b2515b59501914cf920cd995c3\": container with ID starting with fac89806857ac938a954c7e5c1269b31b93472b2515b59501914cf920cd995c3 not found: ID does not exist" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.692173 4981 scope.go:117] "RemoveContainer" 
containerID="f60d550dfd389c6efc973d77297f6e1cf1ce9a1bcffd417d140edce3d9b363ac" Jan 28 15:09:50 crc kubenswrapper[4981]: E0128 15:09:50.692499 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f60d550dfd389c6efc973d77297f6e1cf1ce9a1bcffd417d140edce3d9b363ac\": container with ID starting with f60d550dfd389c6efc973d77297f6e1cf1ce9a1bcffd417d140edce3d9b363ac not found: ID does not exist" containerID="f60d550dfd389c6efc973d77297f6e1cf1ce9a1bcffd417d140edce3d9b363ac" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.692531 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f60d550dfd389c6efc973d77297f6e1cf1ce9a1bcffd417d140edce3d9b363ac"} err="failed to get container status \"f60d550dfd389c6efc973d77297f6e1cf1ce9a1bcffd417d140edce3d9b363ac\": rpc error: code = NotFound desc = could not find container \"f60d550dfd389c6efc973d77297f6e1cf1ce9a1bcffd417d140edce3d9b363ac\": container with ID starting with f60d550dfd389c6efc973d77297f6e1cf1ce9a1bcffd417d140edce3d9b363ac not found: ID does not exist" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.692551 4981 scope.go:117] "RemoveContainer" containerID="4fd51047e389a4ad226b82e520ca1486e074cc00078e47301625638874d5d73b" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.700277 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dqqn8"] Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.700709 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dqqn8"] Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.710409 4981 scope.go:117] "RemoveContainer" containerID="89e7039bbb5b979becf93ca82135aac580b5f1a05e50916af747edf9e4a8b7e3" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.727075 4981 scope.go:117] "RemoveContainer" containerID="6c1abee4818e6b86d573f42e34dd555c3aefe9ac6237f324c65349c1a86f7cc0" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.744466 4981 scope.go:117] "RemoveContainer" containerID="4fd51047e389a4ad226b82e520ca1486e074cc00078e47301625638874d5d73b" Jan 28 15:09:50 crc kubenswrapper[4981]: E0128 15:09:50.744852 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fd51047e389a4ad226b82e520ca1486e074cc00078e47301625638874d5d73b\": container with ID starting with 4fd51047e389a4ad226b82e520ca1486e074cc00078e47301625638874d5d73b not found: ID does not exist" containerID="4fd51047e389a4ad226b82e520ca1486e074cc00078e47301625638874d5d73b" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.744882 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fd51047e389a4ad226b82e520ca1486e074cc00078e47301625638874d5d73b"} err="failed to get container status \"4fd51047e389a4ad226b82e520ca1486e074cc00078e47301625638874d5d73b\": rpc error: code = NotFound desc = could not find container \"4fd51047e389a4ad226b82e520ca1486e074cc00078e47301625638874d5d73b\": container with ID starting with 4fd51047e389a4ad226b82e520ca1486e074cc00078e47301625638874d5d73b not found: ID does not exist" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.744903 4981 scope.go:117] "RemoveContainer" containerID="89e7039bbb5b979becf93ca82135aac580b5f1a05e50916af747edf9e4a8b7e3" Jan 28 15:09:50 crc kubenswrapper[4981]: E0128 15:09:50.745250 4981 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"89e7039bbb5b979becf93ca82135aac580b5f1a05e50916af747edf9e4a8b7e3\": container with ID starting with 89e7039bbb5b979becf93ca82135aac580b5f1a05e50916af747edf9e4a8b7e3 not found: ID does not exist" containerID="89e7039bbb5b979becf93ca82135aac580b5f1a05e50916af747edf9e4a8b7e3" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.745301 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89e7039bbb5b979becf93ca82135aac580b5f1a05e50916af747edf9e4a8b7e3"} err="failed to get container status \"89e7039bbb5b979becf93ca82135aac580b5f1a05e50916af747edf9e4a8b7e3\": rpc error: code = NotFound desc = could not find container \"89e7039bbb5b979becf93ca82135aac580b5f1a05e50916af747edf9e4a8b7e3\": container with ID starting with 89e7039bbb5b979becf93ca82135aac580b5f1a05e50916af747edf9e4a8b7e3 not found: ID does not exist" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.745332 4981 scope.go:117] "RemoveContainer" containerID="6c1abee4818e6b86d573f42e34dd555c3aefe9ac6237f324c65349c1a86f7cc0" Jan 28 15:09:50 crc kubenswrapper[4981]: E0128 15:09:50.745924 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c1abee4818e6b86d573f42e34dd555c3aefe9ac6237f324c65349c1a86f7cc0\": container with ID starting with 6c1abee4818e6b86d573f42e34dd555c3aefe9ac6237f324c65349c1a86f7cc0 not found: ID does not exist" containerID="6c1abee4818e6b86d573f42e34dd555c3aefe9ac6237f324c65349c1a86f7cc0" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.745949 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c1abee4818e6b86d573f42e34dd555c3aefe9ac6237f324c65349c1a86f7cc0"} err="failed to get container status \"6c1abee4818e6b86d573f42e34dd555c3aefe9ac6237f324c65349c1a86f7cc0\": rpc error: code = NotFound desc = could not find container \"6c1abee4818e6b86d573f42e34dd555c3aefe9ac6237f324c65349c1a86f7cc0\": container with ID starting with 6c1abee4818e6b86d573f42e34dd555c3aefe9ac6237f324c65349c1a86f7cc0 not found: ID does not exist" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.745963 4981 scope.go:117] "RemoveContainer" containerID="35a94a3b5c466b0a495713441cfa24da7be3a0af42454266ffcc92f01c1b11d8" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.758451 4981 scope.go:117] "RemoveContainer" containerID="fd2272b6334162cdbb08c76ff2b8b14ffeaa7a1fdb6d2a5262e8be7a3666e442" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.772705 4981 scope.go:117] "RemoveContainer" containerID="3e13f924a8264c00295f87280f363b1f94ad7aec189aa78a7c4447754f452cd7" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.785170 4981 scope.go:117] "RemoveContainer" containerID="35a94a3b5c466b0a495713441cfa24da7be3a0af42454266ffcc92f01c1b11d8" Jan 28 15:09:50 crc kubenswrapper[4981]: E0128 15:09:50.785569 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35a94a3b5c466b0a495713441cfa24da7be3a0af42454266ffcc92f01c1b11d8\": container with ID starting with 35a94a3b5c466b0a495713441cfa24da7be3a0af42454266ffcc92f01c1b11d8 not found: ID does not exist" containerID="35a94a3b5c466b0a495713441cfa24da7be3a0af42454266ffcc92f01c1b11d8" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.785605 4981 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"35a94a3b5c466b0a495713441cfa24da7be3a0af42454266ffcc92f01c1b11d8"} err="failed to get container status \"35a94a3b5c466b0a495713441cfa24da7be3a0af42454266ffcc92f01c1b11d8\": rpc error: code = NotFound desc = could not find container \"35a94a3b5c466b0a495713441cfa24da7be3a0af42454266ffcc92f01c1b11d8\": container with ID starting with 35a94a3b5c466b0a495713441cfa24da7be3a0af42454266ffcc92f01c1b11d8 not found: ID does not exist" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.785632 4981 scope.go:117] "RemoveContainer" containerID="fd2272b6334162cdbb08c76ff2b8b14ffeaa7a1fdb6d2a5262e8be7a3666e442" Jan 28 15:09:50 crc kubenswrapper[4981]: E0128 15:09:50.786884 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd2272b6334162cdbb08c76ff2b8b14ffeaa7a1fdb6d2a5262e8be7a3666e442\": container with ID starting with fd2272b6334162cdbb08c76ff2b8b14ffeaa7a1fdb6d2a5262e8be7a3666e442 not found: ID does not exist" containerID="fd2272b6334162cdbb08c76ff2b8b14ffeaa7a1fdb6d2a5262e8be7a3666e442" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.786913 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd2272b6334162cdbb08c76ff2b8b14ffeaa7a1fdb6d2a5262e8be7a3666e442"} err="failed to get container status \"fd2272b6334162cdbb08c76ff2b8b14ffeaa7a1fdb6d2a5262e8be7a3666e442\": rpc error: code = NotFound desc = could not find container \"fd2272b6334162cdbb08c76ff2b8b14ffeaa7a1fdb6d2a5262e8be7a3666e442\": container with ID starting with fd2272b6334162cdbb08c76ff2b8b14ffeaa7a1fdb6d2a5262e8be7a3666e442 not found: ID does not exist" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.786934 4981 scope.go:117] "RemoveContainer" containerID="3e13f924a8264c00295f87280f363b1f94ad7aec189aa78a7c4447754f452cd7" Jan 28 15:09:50 crc kubenswrapper[4981]: E0128 15:09:50.787222 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e13f924a8264c00295f87280f363b1f94ad7aec189aa78a7c4447754f452cd7\": container with ID starting with 3e13f924a8264c00295f87280f363b1f94ad7aec189aa78a7c4447754f452cd7 not found: ID does not exist" containerID="3e13f924a8264c00295f87280f363b1f94ad7aec189aa78a7c4447754f452cd7" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.787246 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e13f924a8264c00295f87280f363b1f94ad7aec189aa78a7c4447754f452cd7"} err="failed to get container status \"3e13f924a8264c00295f87280f363b1f94ad7aec189aa78a7c4447754f452cd7\": rpc error: code = NotFound desc = could not find container \"3e13f924a8264c00295f87280f363b1f94ad7aec189aa78a7c4447754f452cd7\": container with ID starting with 3e13f924a8264c00295f87280f363b1f94ad7aec189aa78a7c4447754f452cd7 not found: ID does not exist" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.787265 4981 scope.go:117] "RemoveContainer" containerID="214cb5050b07c68d3f285bda2e97e86b258f47d6a99ec8a4cf495965030c95b1" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.800234 4981 scope.go:117] "RemoveContainer" containerID="9aa33c129de91e465fc01a3c32a871c7dde565d83df54f7a9239fde0e26db89b" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.812381 4981 scope.go:117] "RemoveContainer" containerID="35ad42582ed39ee08835dcd416385b5186ae4fc38510834f5f11804f5802ee76" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.825938 4981 
scope.go:117] "RemoveContainer" containerID="214cb5050b07c68d3f285bda2e97e86b258f47d6a99ec8a4cf495965030c95b1" Jan 28 15:09:50 crc kubenswrapper[4981]: E0128 15:09:50.826541 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"214cb5050b07c68d3f285bda2e97e86b258f47d6a99ec8a4cf495965030c95b1\": container with ID starting with 214cb5050b07c68d3f285bda2e97e86b258f47d6a99ec8a4cf495965030c95b1 not found: ID does not exist" containerID="214cb5050b07c68d3f285bda2e97e86b258f47d6a99ec8a4cf495965030c95b1" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.826659 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"214cb5050b07c68d3f285bda2e97e86b258f47d6a99ec8a4cf495965030c95b1"} err="failed to get container status \"214cb5050b07c68d3f285bda2e97e86b258f47d6a99ec8a4cf495965030c95b1\": rpc error: code = NotFound desc = could not find container \"214cb5050b07c68d3f285bda2e97e86b258f47d6a99ec8a4cf495965030c95b1\": container with ID starting with 214cb5050b07c68d3f285bda2e97e86b258f47d6a99ec8a4cf495965030c95b1 not found: ID does not exist" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.826738 4981 scope.go:117] "RemoveContainer" containerID="9aa33c129de91e465fc01a3c32a871c7dde565d83df54f7a9239fde0e26db89b" Jan 28 15:09:50 crc kubenswrapper[4981]: E0128 15:09:50.827069 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9aa33c129de91e465fc01a3c32a871c7dde565d83df54f7a9239fde0e26db89b\": container with ID starting with 9aa33c129de91e465fc01a3c32a871c7dde565d83df54f7a9239fde0e26db89b not found: ID does not exist" containerID="9aa33c129de91e465fc01a3c32a871c7dde565d83df54f7a9239fde0e26db89b" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.827095 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9aa33c129de91e465fc01a3c32a871c7dde565d83df54f7a9239fde0e26db89b"} err="failed to get container status \"9aa33c129de91e465fc01a3c32a871c7dde565d83df54f7a9239fde0e26db89b\": rpc error: code = NotFound desc = could not find container \"9aa33c129de91e465fc01a3c32a871c7dde565d83df54f7a9239fde0e26db89b\": container with ID starting with 9aa33c129de91e465fc01a3c32a871c7dde565d83df54f7a9239fde0e26db89b not found: ID does not exist" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.827116 4981 scope.go:117] "RemoveContainer" containerID="35ad42582ed39ee08835dcd416385b5186ae4fc38510834f5f11804f5802ee76" Jan 28 15:09:50 crc kubenswrapper[4981]: E0128 15:09:50.827558 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35ad42582ed39ee08835dcd416385b5186ae4fc38510834f5f11804f5802ee76\": container with ID starting with 35ad42582ed39ee08835dcd416385b5186ae4fc38510834f5f11804f5802ee76 not found: ID does not exist" containerID="35ad42582ed39ee08835dcd416385b5186ae4fc38510834f5f11804f5802ee76" Jan 28 15:09:50 crc kubenswrapper[4981]: I0128 15:09:50.827585 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35ad42582ed39ee08835dcd416385b5186ae4fc38510834f5f11804f5802ee76"} err="failed to get container status \"35ad42582ed39ee08835dcd416385b5186ae4fc38510834f5f11804f5802ee76\": rpc error: code = NotFound desc = could not find container \"35ad42582ed39ee08835dcd416385b5186ae4fc38510834f5f11804f5802ee76\": container with ID starting with 
35ad42582ed39ee08835dcd416385b5186ae4fc38510834f5f11804f5802ee76 not found: ID does not exist" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.331923 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06d03aa9-a3ff-46c7-bafd-4666c5adf6c1" path="/var/lib/kubelet/pods/06d03aa9-a3ff-46c7-bafd-4666c5adf6c1/volumes" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.333496 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10fb71f7-ebd9-4ce7-91e8-cabe64948d68" path="/var/lib/kubelet/pods/10fb71f7-ebd9-4ce7-91e8-cabe64948d68/volumes" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.334886 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20cbd927-94b5-452e-a139-8d797dd4f4f7" path="/var/lib/kubelet/pods/20cbd927-94b5-452e-a139-8d797dd4f4f7/volumes" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.337055 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2889542-ffb9-4af8-8f77-ccfd601dec88" path="/var/lib/kubelet/pods/a2889542-ffb9-4af8-8f77-ccfd601dec88/volumes" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.338043 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1101605-52b4-4c83-9958-11c0fe93d5e3" path="/var/lib/kubelet/pods/b1101605-52b4-4c83-9958-11c0fe93d5e3/volumes" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.589935 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-49xjs" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.656799 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-njnj4"] Jan 28 15:09:51 crc kubenswrapper[4981]: E0128 15:09:51.657309 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10fb71f7-ebd9-4ce7-91e8-cabe64948d68" containerName="registry-server" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.657409 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="10fb71f7-ebd9-4ce7-91e8-cabe64948d68" containerName="registry-server" Jan 28 15:09:51 crc kubenswrapper[4981]: E0128 15:09:51.657485 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10fb71f7-ebd9-4ce7-91e8-cabe64948d68" containerName="extract-utilities" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.657540 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="10fb71f7-ebd9-4ce7-91e8-cabe64948d68" containerName="extract-utilities" Jan 28 15:09:51 crc kubenswrapper[4981]: E0128 15:09:51.657594 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1101605-52b4-4c83-9958-11c0fe93d5e3" containerName="registry-server" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.657661 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1101605-52b4-4c83-9958-11c0fe93d5e3" containerName="registry-server" Jan 28 15:09:51 crc kubenswrapper[4981]: E0128 15:09:51.657740 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1101605-52b4-4c83-9958-11c0fe93d5e3" containerName="extract-utilities" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.657819 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1101605-52b4-4c83-9958-11c0fe93d5e3" containerName="extract-utilities" Jan 28 15:09:51 crc kubenswrapper[4981]: E0128 15:09:51.657877 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10fb71f7-ebd9-4ce7-91e8-cabe64948d68" containerName="extract-content" Jan 28 15:09:51 crc 
kubenswrapper[4981]: I0128 15:09:51.657948 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="10fb71f7-ebd9-4ce7-91e8-cabe64948d68" containerName="extract-content" Jan 28 15:09:51 crc kubenswrapper[4981]: E0128 15:09:51.658028 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d03aa9-a3ff-46c7-bafd-4666c5adf6c1" containerName="extract-utilities" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.658107 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d03aa9-a3ff-46c7-bafd-4666c5adf6c1" containerName="extract-utilities" Jan 28 15:09:51 crc kubenswrapper[4981]: E0128 15:09:51.658207 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20cbd927-94b5-452e-a139-8d797dd4f4f7" containerName="extract-utilities" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.658306 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="20cbd927-94b5-452e-a139-8d797dd4f4f7" containerName="extract-utilities" Jan 28 15:09:51 crc kubenswrapper[4981]: E0128 15:09:51.658385 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2889542-ffb9-4af8-8f77-ccfd601dec88" containerName="marketplace-operator" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.658461 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2889542-ffb9-4af8-8f77-ccfd601dec88" containerName="marketplace-operator" Jan 28 15:09:51 crc kubenswrapper[4981]: E0128 15:09:51.658516 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20cbd927-94b5-452e-a139-8d797dd4f4f7" containerName="extract-content" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.658569 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="20cbd927-94b5-452e-a139-8d797dd4f4f7" containerName="extract-content" Jan 28 15:09:51 crc kubenswrapper[4981]: E0128 15:09:51.658632 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1101605-52b4-4c83-9958-11c0fe93d5e3" containerName="extract-content" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.658686 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1101605-52b4-4c83-9958-11c0fe93d5e3" containerName="extract-content" Jan 28 15:09:51 crc kubenswrapper[4981]: E0128 15:09:51.658749 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d03aa9-a3ff-46c7-bafd-4666c5adf6c1" containerName="registry-server" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.658807 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d03aa9-a3ff-46c7-bafd-4666c5adf6c1" containerName="registry-server" Jan 28 15:09:51 crc kubenswrapper[4981]: E0128 15:09:51.658867 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20cbd927-94b5-452e-a139-8d797dd4f4f7" containerName="registry-server" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.658919 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="20cbd927-94b5-452e-a139-8d797dd4f4f7" containerName="registry-server" Jan 28 15:09:51 crc kubenswrapper[4981]: E0128 15:09:51.658980 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2889542-ffb9-4af8-8f77-ccfd601dec88" containerName="marketplace-operator" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.659055 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2889542-ffb9-4af8-8f77-ccfd601dec88" containerName="marketplace-operator" Jan 28 15:09:51 crc kubenswrapper[4981]: E0128 15:09:51.659112 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06d03aa9-a3ff-46c7-bafd-4666c5adf6c1" 
containerName="extract-content" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.659204 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="06d03aa9-a3ff-46c7-bafd-4666c5adf6c1" containerName="extract-content" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.659393 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1101605-52b4-4c83-9958-11c0fe93d5e3" containerName="registry-server" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.659467 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="20cbd927-94b5-452e-a139-8d797dd4f4f7" containerName="registry-server" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.659537 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2889542-ffb9-4af8-8f77-ccfd601dec88" containerName="marketplace-operator" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.659597 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="06d03aa9-a3ff-46c7-bafd-4666c5adf6c1" containerName="registry-server" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.659654 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2889542-ffb9-4af8-8f77-ccfd601dec88" containerName="marketplace-operator" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.659710 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="10fb71f7-ebd9-4ce7-91e8-cabe64948d68" containerName="registry-server" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.660585 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-njnj4" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.663318 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-njnj4"] Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.665599 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.758033 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6349813-dabe-4141-9e83-8d8a99458444-utilities\") pod \"redhat-marketplace-njnj4\" (UID: \"b6349813-dabe-4141-9e83-8d8a99458444\") " pod="openshift-marketplace/redhat-marketplace-njnj4" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.758496 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2gkl\" (UniqueName: \"kubernetes.io/projected/b6349813-dabe-4141-9e83-8d8a99458444-kube-api-access-v2gkl\") pod \"redhat-marketplace-njnj4\" (UID: \"b6349813-dabe-4141-9e83-8d8a99458444\") " pod="openshift-marketplace/redhat-marketplace-njnj4" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.758546 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6349813-dabe-4141-9e83-8d8a99458444-catalog-content\") pod \"redhat-marketplace-njnj4\" (UID: \"b6349813-dabe-4141-9e83-8d8a99458444\") " pod="openshift-marketplace/redhat-marketplace-njnj4" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.852574 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-79kcx"] Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.853911 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-79kcx" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.856814 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.859350 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6349813-dabe-4141-9e83-8d8a99458444-utilities\") pod \"redhat-marketplace-njnj4\" (UID: \"b6349813-dabe-4141-9e83-8d8a99458444\") " pod="openshift-marketplace/redhat-marketplace-njnj4" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.859391 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2gkl\" (UniqueName: \"kubernetes.io/projected/b6349813-dabe-4141-9e83-8d8a99458444-kube-api-access-v2gkl\") pod \"redhat-marketplace-njnj4\" (UID: \"b6349813-dabe-4141-9e83-8d8a99458444\") " pod="openshift-marketplace/redhat-marketplace-njnj4" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.859440 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6349813-dabe-4141-9e83-8d8a99458444-catalog-content\") pod \"redhat-marketplace-njnj4\" (UID: \"b6349813-dabe-4141-9e83-8d8a99458444\") " pod="openshift-marketplace/redhat-marketplace-njnj4" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.860059 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6349813-dabe-4141-9e83-8d8a99458444-catalog-content\") pod \"redhat-marketplace-njnj4\" (UID: \"b6349813-dabe-4141-9e83-8d8a99458444\") " pod="openshift-marketplace/redhat-marketplace-njnj4" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.860064 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6349813-dabe-4141-9e83-8d8a99458444-utilities\") pod \"redhat-marketplace-njnj4\" (UID: \"b6349813-dabe-4141-9e83-8d8a99458444\") " pod="openshift-marketplace/redhat-marketplace-njnj4" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.861486 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-79kcx"] Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.889115 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2gkl\" (UniqueName: \"kubernetes.io/projected/b6349813-dabe-4141-9e83-8d8a99458444-kube-api-access-v2gkl\") pod \"redhat-marketplace-njnj4\" (UID: \"b6349813-dabe-4141-9e83-8d8a99458444\") " pod="openshift-marketplace/redhat-marketplace-njnj4" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.960372 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c7bbc77-2c0d-4685-8698-4244f09ca0f3-catalog-content\") pod \"redhat-operators-79kcx\" (UID: \"4c7bbc77-2c0d-4685-8698-4244f09ca0f3\") " pod="openshift-marketplace/redhat-operators-79kcx" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.960422 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c7bbc77-2c0d-4685-8698-4244f09ca0f3-utilities\") pod \"redhat-operators-79kcx\" (UID: \"4c7bbc77-2c0d-4685-8698-4244f09ca0f3\") " 
pod="openshift-marketplace/redhat-operators-79kcx" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.960447 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptbfr\" (UniqueName: \"kubernetes.io/projected/4c7bbc77-2c0d-4685-8698-4244f09ca0f3-kube-api-access-ptbfr\") pod \"redhat-operators-79kcx\" (UID: \"4c7bbc77-2c0d-4685-8698-4244f09ca0f3\") " pod="openshift-marketplace/redhat-operators-79kcx" Jan 28 15:09:51 crc kubenswrapper[4981]: I0128 15:09:51.987525 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-njnj4" Jan 28 15:09:52 crc kubenswrapper[4981]: I0128 15:09:52.062092 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c7bbc77-2c0d-4685-8698-4244f09ca0f3-catalog-content\") pod \"redhat-operators-79kcx\" (UID: \"4c7bbc77-2c0d-4685-8698-4244f09ca0f3\") " pod="openshift-marketplace/redhat-operators-79kcx" Jan 28 15:09:52 crc kubenswrapper[4981]: I0128 15:09:52.062378 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c7bbc77-2c0d-4685-8698-4244f09ca0f3-utilities\") pod \"redhat-operators-79kcx\" (UID: \"4c7bbc77-2c0d-4685-8698-4244f09ca0f3\") " pod="openshift-marketplace/redhat-operators-79kcx" Jan 28 15:09:52 crc kubenswrapper[4981]: I0128 15:09:52.062513 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptbfr\" (UniqueName: \"kubernetes.io/projected/4c7bbc77-2c0d-4685-8698-4244f09ca0f3-kube-api-access-ptbfr\") pod \"redhat-operators-79kcx\" (UID: \"4c7bbc77-2c0d-4685-8698-4244f09ca0f3\") " pod="openshift-marketplace/redhat-operators-79kcx" Jan 28 15:09:52 crc kubenswrapper[4981]: I0128 15:09:52.062804 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c7bbc77-2c0d-4685-8698-4244f09ca0f3-catalog-content\") pod \"redhat-operators-79kcx\" (UID: \"4c7bbc77-2c0d-4685-8698-4244f09ca0f3\") " pod="openshift-marketplace/redhat-operators-79kcx" Jan 28 15:09:52 crc kubenswrapper[4981]: I0128 15:09:52.062940 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c7bbc77-2c0d-4685-8698-4244f09ca0f3-utilities\") pod \"redhat-operators-79kcx\" (UID: \"4c7bbc77-2c0d-4685-8698-4244f09ca0f3\") " pod="openshift-marketplace/redhat-operators-79kcx" Jan 28 15:09:52 crc kubenswrapper[4981]: I0128 15:09:52.096011 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptbfr\" (UniqueName: \"kubernetes.io/projected/4c7bbc77-2c0d-4685-8698-4244f09ca0f3-kube-api-access-ptbfr\") pod \"redhat-operators-79kcx\" (UID: \"4c7bbc77-2c0d-4685-8698-4244f09ca0f3\") " pod="openshift-marketplace/redhat-operators-79kcx" Jan 28 15:09:52 crc kubenswrapper[4981]: I0128 15:09:52.169127 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-79kcx" Jan 28 15:09:52 crc kubenswrapper[4981]: I0128 15:09:52.438563 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-njnj4"] Jan 28 15:09:52 crc kubenswrapper[4981]: W0128 15:09:52.449523 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6349813_dabe_4141_9e83_8d8a99458444.slice/crio-cf417e5820d00942ba3600c908ee323238f0daff9d90f0ae4a0e59eb6a367288 WatchSource:0}: Error finding container cf417e5820d00942ba3600c908ee323238f0daff9d90f0ae4a0e59eb6a367288: Status 404 returned error can't find the container with id cf417e5820d00942ba3600c908ee323238f0daff9d90f0ae4a0e59eb6a367288 Jan 28 15:09:52 crc kubenswrapper[4981]: I0128 15:09:52.531923 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-79kcx"] Jan 28 15:09:52 crc kubenswrapper[4981]: W0128 15:09:52.537712 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c7bbc77_2c0d_4685_8698_4244f09ca0f3.slice/crio-b7382b4753db506011b0bebe1d5be333d32335a26a382e2ae6b8baa92eaca4fd WatchSource:0}: Error finding container b7382b4753db506011b0bebe1d5be333d32335a26a382e2ae6b8baa92eaca4fd: Status 404 returned error can't find the container with id b7382b4753db506011b0bebe1d5be333d32335a26a382e2ae6b8baa92eaca4fd Jan 28 15:09:52 crc kubenswrapper[4981]: I0128 15:09:52.589889 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79kcx" event={"ID":"4c7bbc77-2c0d-4685-8698-4244f09ca0f3","Type":"ContainerStarted","Data":"b7382b4753db506011b0bebe1d5be333d32335a26a382e2ae6b8baa92eaca4fd"} Jan 28 15:09:52 crc kubenswrapper[4981]: I0128 15:09:52.591747 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-njnj4" event={"ID":"b6349813-dabe-4141-9e83-8d8a99458444","Type":"ContainerStarted","Data":"cf417e5820d00942ba3600c908ee323238f0daff9d90f0ae4a0e59eb6a367288"} Jan 28 15:09:53 crc kubenswrapper[4981]: I0128 15:09:53.596780 4981 generic.go:334] "Generic (PLEG): container finished" podID="4c7bbc77-2c0d-4685-8698-4244f09ca0f3" containerID="252034e9a84529bb750c481cd90ca8f2318f41a795d4480b6926c2c13baf86b3" exitCode=0 Jan 28 15:09:53 crc kubenswrapper[4981]: I0128 15:09:53.596868 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79kcx" event={"ID":"4c7bbc77-2c0d-4685-8698-4244f09ca0f3","Type":"ContainerDied","Data":"252034e9a84529bb750c481cd90ca8f2318f41a795d4480b6926c2c13baf86b3"} Jan 28 15:09:53 crc kubenswrapper[4981]: I0128 15:09:53.599006 4981 generic.go:334] "Generic (PLEG): container finished" podID="b6349813-dabe-4141-9e83-8d8a99458444" containerID="749812f6bb9d581d6a39189ec9e779fcfbba12880659bc03e71246f68f0a2635" exitCode=0 Jan 28 15:09:53 crc kubenswrapper[4981]: I0128 15:09:53.599048 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-njnj4" event={"ID":"b6349813-dabe-4141-9e83-8d8a99458444","Type":"ContainerDied","Data":"749812f6bb9d581d6a39189ec9e779fcfbba12880659bc03e71246f68f0a2635"} Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.045322 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bcv2k"] Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.046292 4981 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bcv2k" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.048652 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.093167 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bcv2k"] Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.193799 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/386dca69-5d28-4d18-899e-7fd92d5eb6ad-catalog-content\") pod \"certified-operators-bcv2k\" (UID: \"386dca69-5d28-4d18-899e-7fd92d5eb6ad\") " pod="openshift-marketplace/certified-operators-bcv2k" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.193850 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/386dca69-5d28-4d18-899e-7fd92d5eb6ad-utilities\") pod \"certified-operators-bcv2k\" (UID: \"386dca69-5d28-4d18-899e-7fd92d5eb6ad\") " pod="openshift-marketplace/certified-operators-bcv2k" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.193930 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsbq9\" (UniqueName: \"kubernetes.io/projected/386dca69-5d28-4d18-899e-7fd92d5eb6ad-kube-api-access-jsbq9\") pod \"certified-operators-bcv2k\" (UID: \"386dca69-5d28-4d18-899e-7fd92d5eb6ad\") " pod="openshift-marketplace/certified-operators-bcv2k" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.246406 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c6hpb"] Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.247380 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c6hpb" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.250790 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.269728 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c6hpb"] Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.294795 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baebe073-e075-4f9f-98aa-d1fbe2e55934-catalog-content\") pod \"community-operators-c6hpb\" (UID: \"baebe073-e075-4f9f-98aa-d1fbe2e55934\") " pod="openshift-marketplace/community-operators-c6hpb" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.294897 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baebe073-e075-4f9f-98aa-d1fbe2e55934-utilities\") pod \"community-operators-c6hpb\" (UID: \"baebe073-e075-4f9f-98aa-d1fbe2e55934\") " pod="openshift-marketplace/community-operators-c6hpb" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.294977 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/386dca69-5d28-4d18-899e-7fd92d5eb6ad-catalog-content\") pod \"certified-operators-bcv2k\" (UID: \"386dca69-5d28-4d18-899e-7fd92d5eb6ad\") " pod="openshift-marketplace/certified-operators-bcv2k" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.295034 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/386dca69-5d28-4d18-899e-7fd92d5eb6ad-utilities\") pod \"certified-operators-bcv2k\" (UID: \"386dca69-5d28-4d18-899e-7fd92d5eb6ad\") " pod="openshift-marketplace/certified-operators-bcv2k" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.295084 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnpk6\" (UniqueName: \"kubernetes.io/projected/baebe073-e075-4f9f-98aa-d1fbe2e55934-kube-api-access-hnpk6\") pod \"community-operators-c6hpb\" (UID: \"baebe073-e075-4f9f-98aa-d1fbe2e55934\") " pod="openshift-marketplace/community-operators-c6hpb" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.295135 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsbq9\" (UniqueName: \"kubernetes.io/projected/386dca69-5d28-4d18-899e-7fd92d5eb6ad-kube-api-access-jsbq9\") pod \"certified-operators-bcv2k\" (UID: \"386dca69-5d28-4d18-899e-7fd92d5eb6ad\") " pod="openshift-marketplace/certified-operators-bcv2k" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.296993 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/386dca69-5d28-4d18-899e-7fd92d5eb6ad-catalog-content\") pod \"certified-operators-bcv2k\" (UID: \"386dca69-5d28-4d18-899e-7fd92d5eb6ad\") " pod="openshift-marketplace/certified-operators-bcv2k" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.298335 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/386dca69-5d28-4d18-899e-7fd92d5eb6ad-utilities\") pod \"certified-operators-bcv2k\" (UID: 
\"386dca69-5d28-4d18-899e-7fd92d5eb6ad\") " pod="openshift-marketplace/certified-operators-bcv2k" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.326004 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsbq9\" (UniqueName: \"kubernetes.io/projected/386dca69-5d28-4d18-899e-7fd92d5eb6ad-kube-api-access-jsbq9\") pod \"certified-operators-bcv2k\" (UID: \"386dca69-5d28-4d18-899e-7fd92d5eb6ad\") " pod="openshift-marketplace/certified-operators-bcv2k" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.382042 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bcv2k" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.396955 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnpk6\" (UniqueName: \"kubernetes.io/projected/baebe073-e075-4f9f-98aa-d1fbe2e55934-kube-api-access-hnpk6\") pod \"community-operators-c6hpb\" (UID: \"baebe073-e075-4f9f-98aa-d1fbe2e55934\") " pod="openshift-marketplace/community-operators-c6hpb" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.397086 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baebe073-e075-4f9f-98aa-d1fbe2e55934-catalog-content\") pod \"community-operators-c6hpb\" (UID: \"baebe073-e075-4f9f-98aa-d1fbe2e55934\") " pod="openshift-marketplace/community-operators-c6hpb" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.397170 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baebe073-e075-4f9f-98aa-d1fbe2e55934-utilities\") pod \"community-operators-c6hpb\" (UID: \"baebe073-e075-4f9f-98aa-d1fbe2e55934\") " pod="openshift-marketplace/community-operators-c6hpb" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.398350 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baebe073-e075-4f9f-98aa-d1fbe2e55934-catalog-content\") pod \"community-operators-c6hpb\" (UID: \"baebe073-e075-4f9f-98aa-d1fbe2e55934\") " pod="openshift-marketplace/community-operators-c6hpb" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.398475 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baebe073-e075-4f9f-98aa-d1fbe2e55934-utilities\") pod \"community-operators-c6hpb\" (UID: \"baebe073-e075-4f9f-98aa-d1fbe2e55934\") " pod="openshift-marketplace/community-operators-c6hpb" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.420982 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnpk6\" (UniqueName: \"kubernetes.io/projected/baebe073-e075-4f9f-98aa-d1fbe2e55934-kube-api-access-hnpk6\") pod \"community-operators-c6hpb\" (UID: \"baebe073-e075-4f9f-98aa-d1fbe2e55934\") " pod="openshift-marketplace/community-operators-c6hpb" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.568460 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c6hpb" Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.605963 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79kcx" event={"ID":"4c7bbc77-2c0d-4685-8698-4244f09ca0f3","Type":"ContainerStarted","Data":"204c2711dd6e54d763554ee48118d43fda5b3a5d12f6031f1456c69634085ec2"} Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.607793 4981 generic.go:334] "Generic (PLEG): container finished" podID="b6349813-dabe-4141-9e83-8d8a99458444" containerID="2adf162f8879545fc019e94ab9ee4a5d509bdab998887e32acaa0f8a1cfe5316" exitCode=0 Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.607830 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-njnj4" event={"ID":"b6349813-dabe-4141-9e83-8d8a99458444","Type":"ContainerDied","Data":"2adf162f8879545fc019e94ab9ee4a5d509bdab998887e32acaa0f8a1cfe5316"} Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.832739 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bcv2k"] Jan 28 15:09:54 crc kubenswrapper[4981]: I0128 15:09:54.976310 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c6hpb"] Jan 28 15:09:54 crc kubenswrapper[4981]: W0128 15:09:54.982831 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbaebe073_e075_4f9f_98aa_d1fbe2e55934.slice/crio-c25716fe5471a982425f525cddc731ec5cbf416ae7cce6c8ec8ca591cf448ff0 WatchSource:0}: Error finding container c25716fe5471a982425f525cddc731ec5cbf416ae7cce6c8ec8ca591cf448ff0: Status 404 returned error can't find the container with id c25716fe5471a982425f525cddc731ec5cbf416ae7cce6c8ec8ca591cf448ff0 Jan 28 15:09:55 crc kubenswrapper[4981]: I0128 15:09:55.616705 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-njnj4" event={"ID":"b6349813-dabe-4141-9e83-8d8a99458444","Type":"ContainerStarted","Data":"7fcc67a1def93dc3b88a811be918310c0e53343f53d4fe11588a13bc76967f6f"} Jan 28 15:09:55 crc kubenswrapper[4981]: I0128 15:09:55.618610 4981 generic.go:334] "Generic (PLEG): container finished" podID="baebe073-e075-4f9f-98aa-d1fbe2e55934" containerID="05e946735bc508c363795b8562e2cbe60f14d28f45a33ea88effd4c2e27227e7" exitCode=0 Jan 28 15:09:55 crc kubenswrapper[4981]: I0128 15:09:55.618709 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6hpb" event={"ID":"baebe073-e075-4f9f-98aa-d1fbe2e55934","Type":"ContainerDied","Data":"05e946735bc508c363795b8562e2cbe60f14d28f45a33ea88effd4c2e27227e7"} Jan 28 15:09:55 crc kubenswrapper[4981]: I0128 15:09:55.618747 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6hpb" event={"ID":"baebe073-e075-4f9f-98aa-d1fbe2e55934","Type":"ContainerStarted","Data":"c25716fe5471a982425f525cddc731ec5cbf416ae7cce6c8ec8ca591cf448ff0"} Jan 28 15:09:55 crc kubenswrapper[4981]: I0128 15:09:55.621041 4981 generic.go:334] "Generic (PLEG): container finished" podID="386dca69-5d28-4d18-899e-7fd92d5eb6ad" containerID="72ead4459b136664249656f7bd5a2c1bca14ca75060231d7667055c0b2ddaa74" exitCode=0 Jan 28 15:09:55 crc kubenswrapper[4981]: I0128 15:09:55.621125 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcv2k" 
event={"ID":"386dca69-5d28-4d18-899e-7fd92d5eb6ad","Type":"ContainerDied","Data":"72ead4459b136664249656f7bd5a2c1bca14ca75060231d7667055c0b2ddaa74"} Jan 28 15:09:55 crc kubenswrapper[4981]: I0128 15:09:55.621161 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcv2k" event={"ID":"386dca69-5d28-4d18-899e-7fd92d5eb6ad","Type":"ContainerStarted","Data":"2aba95562a1329c740f55790b76ea9b245ad492f78212cf96e256e1355a7f002"} Jan 28 15:09:55 crc kubenswrapper[4981]: I0128 15:09:55.624833 4981 generic.go:334] "Generic (PLEG): container finished" podID="4c7bbc77-2c0d-4685-8698-4244f09ca0f3" containerID="204c2711dd6e54d763554ee48118d43fda5b3a5d12f6031f1456c69634085ec2" exitCode=0 Jan 28 15:09:55 crc kubenswrapper[4981]: I0128 15:09:55.624870 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79kcx" event={"ID":"4c7bbc77-2c0d-4685-8698-4244f09ca0f3","Type":"ContainerDied","Data":"204c2711dd6e54d763554ee48118d43fda5b3a5d12f6031f1456c69634085ec2"} Jan 28 15:09:55 crc kubenswrapper[4981]: I0128 15:09:55.642286 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-njnj4" podStartSLOduration=3.239450242 podStartE2EDuration="4.642264073s" podCreationTimestamp="2026-01-28 15:09:51 +0000 UTC" firstStartedPulling="2026-01-28 15:09:53.600125624 +0000 UTC m=+405.052283865" lastFinishedPulling="2026-01-28 15:09:55.002939455 +0000 UTC m=+406.455097696" observedRunningTime="2026-01-28 15:09:55.640959106 +0000 UTC m=+407.093117347" watchObservedRunningTime="2026-01-28 15:09:55.642264073 +0000 UTC m=+407.094422324" Jan 28 15:09:56 crc kubenswrapper[4981]: I0128 15:09:56.653772 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6hpb" event={"ID":"baebe073-e075-4f9f-98aa-d1fbe2e55934","Type":"ContainerStarted","Data":"0c0a646ce353410bf627c6adbdad6d82e8f607546e13751ab183c2aacbc33be1"} Jan 28 15:09:56 crc kubenswrapper[4981]: I0128 15:09:56.662814 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcv2k" event={"ID":"386dca69-5d28-4d18-899e-7fd92d5eb6ad","Type":"ContainerStarted","Data":"763fd855b5177b99dfaa638d05a826e865f1be84bb1a0b9ed31c74c6cc57ba04"} Jan 28 15:09:56 crc kubenswrapper[4981]: I0128 15:09:56.667683 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79kcx" event={"ID":"4c7bbc77-2c0d-4685-8698-4244f09ca0f3","Type":"ContainerStarted","Data":"965dafa183c949e5730e57b45732b3967580a0840168912f1310e235ab73e9d5"} Jan 28 15:09:56 crc kubenswrapper[4981]: I0128 15:09:56.706375 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-79kcx" podStartSLOduration=3.188990896 podStartE2EDuration="5.706352627s" podCreationTimestamp="2026-01-28 15:09:51 +0000 UTC" firstStartedPulling="2026-01-28 15:09:53.600197756 +0000 UTC m=+405.052355997" lastFinishedPulling="2026-01-28 15:09:56.117559487 +0000 UTC m=+407.569717728" observedRunningTime="2026-01-28 15:09:56.704842625 +0000 UTC m=+408.157000866" watchObservedRunningTime="2026-01-28 15:09:56.706352627 +0000 UTC m=+408.158510858" Jan 28 15:09:57 crc kubenswrapper[4981]: I0128 15:09:57.673312 4981 generic.go:334] "Generic (PLEG): container finished" podID="baebe073-e075-4f9f-98aa-d1fbe2e55934" containerID="0c0a646ce353410bf627c6adbdad6d82e8f607546e13751ab183c2aacbc33be1" exitCode=0 Jan 28 15:09:57 crc 
kubenswrapper[4981]: I0128 15:09:57.673376 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6hpb" event={"ID":"baebe073-e075-4f9f-98aa-d1fbe2e55934","Type":"ContainerDied","Data":"0c0a646ce353410bf627c6adbdad6d82e8f607546e13751ab183c2aacbc33be1"} Jan 28 15:09:57 crc kubenswrapper[4981]: I0128 15:09:57.674744 4981 generic.go:334] "Generic (PLEG): container finished" podID="386dca69-5d28-4d18-899e-7fd92d5eb6ad" containerID="763fd855b5177b99dfaa638d05a826e865f1be84bb1a0b9ed31c74c6cc57ba04" exitCode=0 Jan 28 15:09:57 crc kubenswrapper[4981]: I0128 15:09:57.674821 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcv2k" event={"ID":"386dca69-5d28-4d18-899e-7fd92d5eb6ad","Type":"ContainerDied","Data":"763fd855b5177b99dfaa638d05a826e865f1be84bb1a0b9ed31c74c6cc57ba04"} Jan 28 15:09:58 crc kubenswrapper[4981]: I0128 15:09:58.683010 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c6hpb" event={"ID":"baebe073-e075-4f9f-98aa-d1fbe2e55934","Type":"ContainerStarted","Data":"0d8baed3ff6426f5aace16f4bbb23c97f48122a22fabc08438e518c12a7bded7"} Jan 28 15:09:58 crc kubenswrapper[4981]: I0128 15:09:58.686617 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bcv2k" event={"ID":"386dca69-5d28-4d18-899e-7fd92d5eb6ad","Type":"ContainerStarted","Data":"82c1a21d07aaebb14fa255023a702994d61e00cbd9040d1042742a0acff3d3d7"} Jan 28 15:09:58 crc kubenswrapper[4981]: I0128 15:09:58.704985 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c6hpb" podStartSLOduration=2.185702327 podStartE2EDuration="4.704961183s" podCreationTimestamp="2026-01-28 15:09:54 +0000 UTC" firstStartedPulling="2026-01-28 15:09:55.620631699 +0000 UTC m=+407.072789940" lastFinishedPulling="2026-01-28 15:09:58.139890555 +0000 UTC m=+409.592048796" observedRunningTime="2026-01-28 15:09:58.703099951 +0000 UTC m=+410.155258202" watchObservedRunningTime="2026-01-28 15:09:58.704961183 +0000 UTC m=+410.157119434" Jan 28 15:09:58 crc kubenswrapper[4981]: I0128 15:09:58.723764 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bcv2k" podStartSLOduration=2.022740503 podStartE2EDuration="4.723743327s" podCreationTimestamp="2026-01-28 15:09:54 +0000 UTC" firstStartedPulling="2026-01-28 15:09:55.622534642 +0000 UTC m=+407.074692883" lastFinishedPulling="2026-01-28 15:09:58.323537466 +0000 UTC m=+409.775695707" observedRunningTime="2026-01-28 15:09:58.721123604 +0000 UTC m=+410.173281855" watchObservedRunningTime="2026-01-28 15:09:58.723743327 +0000 UTC m=+410.175901568" Jan 28 15:09:59 crc kubenswrapper[4981]: I0128 15:09:59.329596 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-ltxqv" Jan 28 15:09:59 crc kubenswrapper[4981]: I0128 15:09:59.418868 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vr269"] Jan 28 15:10:01 crc kubenswrapper[4981]: I0128 15:10:01.988669 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-njnj4" Jan 28 15:10:01 crc kubenswrapper[4981]: I0128 15:10:01.989126 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-njnj4" Jan 28 
15:10:02 crc kubenswrapper[4981]: I0128 15:10:02.061133 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-njnj4" Jan 28 15:10:02 crc kubenswrapper[4981]: I0128 15:10:02.169934 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-79kcx" Jan 28 15:10:02 crc kubenswrapper[4981]: I0128 15:10:02.169975 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-79kcx" Jan 28 15:10:02 crc kubenswrapper[4981]: I0128 15:10:02.769022 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-njnj4" Jan 28 15:10:03 crc kubenswrapper[4981]: I0128 15:10:03.210898 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-79kcx" podUID="4c7bbc77-2c0d-4685-8698-4244f09ca0f3" containerName="registry-server" probeResult="failure" output=< Jan 28 15:10:03 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s Jan 28 15:10:03 crc kubenswrapper[4981]: > Jan 28 15:10:04 crc kubenswrapper[4981]: I0128 15:10:04.383479 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bcv2k" Jan 28 15:10:04 crc kubenswrapper[4981]: I0128 15:10:04.384236 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bcv2k" Jan 28 15:10:04 crc kubenswrapper[4981]: I0128 15:10:04.443403 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bcv2k" Jan 28 15:10:04 crc kubenswrapper[4981]: I0128 15:10:04.568852 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c6hpb" Jan 28 15:10:04 crc kubenswrapper[4981]: I0128 15:10:04.569356 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c6hpb" Jan 28 15:10:04 crc kubenswrapper[4981]: I0128 15:10:04.609109 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c6hpb" Jan 28 15:10:04 crc kubenswrapper[4981]: I0128 15:10:04.756382 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c6hpb" Jan 28 15:10:04 crc kubenswrapper[4981]: I0128 15:10:04.760819 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bcv2k" Jan 28 15:10:12 crc kubenswrapper[4981]: I0128 15:10:12.236876 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-79kcx" Jan 28 15:10:12 crc kubenswrapper[4981]: I0128 15:10:12.288996 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-79kcx" Jan 28 15:10:19 crc kubenswrapper[4981]: I0128 15:10:19.897652 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:10:19 crc kubenswrapper[4981]: I0128 15:10:19.898029 4981 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:10:19 crc kubenswrapper[4981]: I0128 15:10:19.898093 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:10:19 crc kubenswrapper[4981]: I0128 15:10:19.898856 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a64a7e643e94a4cd76d766a4c3449bca81854c0a93bd4d8a5c7f5aa7c7eb50b8"} pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:10:19 crc kubenswrapper[4981]: I0128 15:10:19.898952 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" containerID="cri-o://a64a7e643e94a4cd76d766a4c3449bca81854c0a93bd4d8a5c7f5aa7c7eb50b8" gracePeriod=600 Jan 28 15:10:20 crc kubenswrapper[4981]: I0128 15:10:20.815337 4981 generic.go:334] "Generic (PLEG): container finished" podID="67525d77-715e-4ec3-bdbb-6854657355c0" containerID="a64a7e643e94a4cd76d766a4c3449bca81854c0a93bd4d8a5c7f5aa7c7eb50b8" exitCode=0 Jan 28 15:10:20 crc kubenswrapper[4981]: I0128 15:10:20.815435 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerDied","Data":"a64a7e643e94a4cd76d766a4c3449bca81854c0a93bd4d8a5c7f5aa7c7eb50b8"} Jan 28 15:10:20 crc kubenswrapper[4981]: I0128 15:10:20.815888 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerStarted","Data":"d03207bd7d360434e69cdf83589709537b56f9611d1c6a12671a9b7de643ea90"} Jan 28 15:10:20 crc kubenswrapper[4981]: I0128 15:10:20.815913 4981 scope.go:117] "RemoveContainer" containerID="a19502d178be0814c8e08076d91acadc27c4b39198d597f70863a52a0d500dd6" Jan 28 15:10:24 crc kubenswrapper[4981]: I0128 15:10:24.469921 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-vr269" podUID="fa35bf3f-51fc-43b8-8e38-ed5c88a362f7" containerName="registry" containerID="cri-o://e7dd630e086659897d42ca67b7ed5e04e4fb05246aa2c44aabf9c3070a51a582" gracePeriod=30 Jan 28 15:10:24 crc kubenswrapper[4981]: I0128 15:10:24.853414 4981 generic.go:334] "Generic (PLEG): container finished" podID="fa35bf3f-51fc-43b8-8e38-ed5c88a362f7" containerID="e7dd630e086659897d42ca67b7ed5e04e4fb05246aa2c44aabf9c3070a51a582" exitCode=0 Jan 28 15:10:24 crc kubenswrapper[4981]: I0128 15:10:24.853752 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vr269" event={"ID":"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7","Type":"ContainerDied","Data":"e7dd630e086659897d42ca67b7ed5e04e4fb05246aa2c44aabf9c3070a51a582"} Jan 28 15:10:24 crc kubenswrapper[4981]: I0128 15:10:24.980366 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.065114 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-installation-pull-secrets\") pod \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.065159 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-registry-certificates\") pod \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.065311 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-ca-trust-extracted\") pod \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.065358 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-trusted-ca\") pod \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.065382 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-bound-sa-token\") pod \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.065406 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-registry-tls\") pod \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.065585 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.065617 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwhxz\" (UniqueName: \"kubernetes.io/projected/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-kube-api-access-dwhxz\") pod \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\" (UID: \"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7\") " Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.066695 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.066747 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.071072 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.071680 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.073311 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-kube-api-access-dwhxz" (OuterVolumeSpecName: "kube-api-access-dwhxz") pod "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7"). InnerVolumeSpecName "kube-api-access-dwhxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.074032 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.082363 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.090831 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7" (UID: "fa35bf3f-51fc-43b8-8e38-ed5c88a362f7"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.166823 4981 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.166860 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.166872 4981 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.166882 4981 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.166894 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwhxz\" (UniqueName: \"kubernetes.io/projected/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-kube-api-access-dwhxz\") on node \"crc\" DevicePath \"\"" Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.166908 4981 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.166920 4981 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.863344 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vr269" event={"ID":"fa35bf3f-51fc-43b8-8e38-ed5c88a362f7","Type":"ContainerDied","Data":"ab9c896bcd9886899ccd202be181525b7228310f05e1e26c35e71ed32d1dd11f"} Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.863445 4981 scope.go:117] "RemoveContainer" containerID="e7dd630e086659897d42ca67b7ed5e04e4fb05246aa2c44aabf9c3070a51a582" Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.863483 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vr269" Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.894148 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vr269"] Jan 28 15:10:25 crc kubenswrapper[4981]: I0128 15:10:25.904886 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vr269"] Jan 28 15:10:27 crc kubenswrapper[4981]: I0128 15:10:27.330825 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa35bf3f-51fc-43b8-8e38-ed5c88a362f7" path="/var/lib/kubelet/pods/fa35bf3f-51fc-43b8-8e38-ed5c88a362f7/volumes" Jan 28 15:12:49 crc kubenswrapper[4981]: I0128 15:12:49.897985 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:12:49 crc kubenswrapper[4981]: I0128 15:12:49.898904 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:13:19 crc kubenswrapper[4981]: I0128 15:13:19.898124 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:13:19 crc kubenswrapper[4981]: I0128 15:13:19.898797 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:13:49 crc kubenswrapper[4981]: I0128 15:13:49.898085 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:13:49 crc kubenswrapper[4981]: I0128 15:13:49.898789 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:13:49 crc kubenswrapper[4981]: I0128 15:13:49.898842 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:13:49 crc kubenswrapper[4981]: I0128 15:13:49.899501 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d03207bd7d360434e69cdf83589709537b56f9611d1c6a12671a9b7de643ea90"} pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Jan 28 15:13:49 crc kubenswrapper[4981]: I0128 15:13:49.899577 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" containerID="cri-o://d03207bd7d360434e69cdf83589709537b56f9611d1c6a12671a9b7de643ea90" gracePeriod=600 Jan 28 15:13:50 crc kubenswrapper[4981]: I0128 15:13:50.285629 4981 generic.go:334] "Generic (PLEG): container finished" podID="67525d77-715e-4ec3-bdbb-6854657355c0" containerID="d03207bd7d360434e69cdf83589709537b56f9611d1c6a12671a9b7de643ea90" exitCode=0 Jan 28 15:13:50 crc kubenswrapper[4981]: I0128 15:13:50.285692 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerDied","Data":"d03207bd7d360434e69cdf83589709537b56f9611d1c6a12671a9b7de643ea90"} Jan 28 15:13:50 crc kubenswrapper[4981]: I0128 15:13:50.286015 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerStarted","Data":"c69a7071dbf3ec3f1115d8a9515e0de8b513ecd90cb4130db9534e4ea3ba8dac"} Jan 28 15:13:50 crc kubenswrapper[4981]: I0128 15:13:50.286043 4981 scope.go:117] "RemoveContainer" containerID="a64a7e643e94a4cd76d766a4c3449bca81854c0a93bd4d8a5c7f5aa7c7eb50b8" Jan 28 15:14:09 crc kubenswrapper[4981]: I0128 15:14:09.543337 4981 scope.go:117] "RemoveContainer" containerID="e3d53e30a8fb7aa5c3d8c1dc47b1f770c3b4d4536e7fd00bf9223394426397c3" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.219171 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lzqwx"] Jan 28 15:14:55 crc kubenswrapper[4981]: E0128 15:14:55.221374 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa35bf3f-51fc-43b8-8e38-ed5c88a362f7" containerName="registry" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.221518 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa35bf3f-51fc-43b8-8e38-ed5c88a362f7" containerName="registry" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.221782 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa35bf3f-51fc-43b8-8e38-ed5c88a362f7" containerName="registry" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.222599 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lzqwx" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.231928 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.232072 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.232232 4981 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-zmvxx" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.233893 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lzqwx"] Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.242599 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-m9hjc"] Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.243322 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-m9hjc" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.245173 4981 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-28pjb" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.247629 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-rds8t"] Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.248209 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-rds8t" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.258294 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-m9hjc"] Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.258643 4981 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-52wtt" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.261491 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-rds8t"] Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.311027 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57kbz\" (UniqueName: \"kubernetes.io/projected/499b468f-8150-49ce-9ec6-964f94f1234d-kube-api-access-57kbz\") pod \"cert-manager-webhook-687f57d79b-m9hjc\" (UID: \"499b468f-8150-49ce-9ec6-964f94f1234d\") " pod="cert-manager/cert-manager-webhook-687f57d79b-m9hjc" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.311090 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tj7c\" (UniqueName: \"kubernetes.io/projected/531e0ce3-f8d2-423f-8934-5427dca677c8-kube-api-access-6tj7c\") pod \"cert-manager-cainjector-cf98fcc89-lzqwx\" (UID: \"531e0ce3-f8d2-423f-8934-5427dca677c8\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lzqwx" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.311314 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx42h\" (UniqueName: \"kubernetes.io/projected/275dc545-afa7-4d22-9c2e-bc41e21e187f-kube-api-access-xx42h\") pod \"cert-manager-858654f9db-rds8t\" (UID: \"275dc545-afa7-4d22-9c2e-bc41e21e187f\") " pod="cert-manager/cert-manager-858654f9db-rds8t" Jan 28 15:14:55 crc 
kubenswrapper[4981]: I0128 15:14:55.413796 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57kbz\" (UniqueName: \"kubernetes.io/projected/499b468f-8150-49ce-9ec6-964f94f1234d-kube-api-access-57kbz\") pod \"cert-manager-webhook-687f57d79b-m9hjc\" (UID: \"499b468f-8150-49ce-9ec6-964f94f1234d\") " pod="cert-manager/cert-manager-webhook-687f57d79b-m9hjc" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.413868 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tj7c\" (UniqueName: \"kubernetes.io/projected/531e0ce3-f8d2-423f-8934-5427dca677c8-kube-api-access-6tj7c\") pod \"cert-manager-cainjector-cf98fcc89-lzqwx\" (UID: \"531e0ce3-f8d2-423f-8934-5427dca677c8\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lzqwx" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.414735 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx42h\" (UniqueName: \"kubernetes.io/projected/275dc545-afa7-4d22-9c2e-bc41e21e187f-kube-api-access-xx42h\") pod \"cert-manager-858654f9db-rds8t\" (UID: \"275dc545-afa7-4d22-9c2e-bc41e21e187f\") " pod="cert-manager/cert-manager-858654f9db-rds8t" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.436773 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx42h\" (UniqueName: \"kubernetes.io/projected/275dc545-afa7-4d22-9c2e-bc41e21e187f-kube-api-access-xx42h\") pod \"cert-manager-858654f9db-rds8t\" (UID: \"275dc545-afa7-4d22-9c2e-bc41e21e187f\") " pod="cert-manager/cert-manager-858654f9db-rds8t" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.436773 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tj7c\" (UniqueName: \"kubernetes.io/projected/531e0ce3-f8d2-423f-8934-5427dca677c8-kube-api-access-6tj7c\") pod \"cert-manager-cainjector-cf98fcc89-lzqwx\" (UID: \"531e0ce3-f8d2-423f-8934-5427dca677c8\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-lzqwx" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.441448 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57kbz\" (UniqueName: \"kubernetes.io/projected/499b468f-8150-49ce-9ec6-964f94f1234d-kube-api-access-57kbz\") pod \"cert-manager-webhook-687f57d79b-m9hjc\" (UID: \"499b468f-8150-49ce-9ec6-964f94f1234d\") " pod="cert-manager/cert-manager-webhook-687f57d79b-m9hjc" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.541212 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lzqwx" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.557935 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-m9hjc" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.564476 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-rds8t" Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.784129 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-lzqwx"] Jan 28 15:14:55 crc kubenswrapper[4981]: I0128 15:14:55.799293 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:14:56 crc kubenswrapper[4981]: I0128 15:14:56.037998 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-rds8t"] Jan 28 15:14:56 crc kubenswrapper[4981]: W0128 15:14:56.039087 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod275dc545_afa7_4d22_9c2e_bc41e21e187f.slice/crio-f5961fc5f155e6ac60213a3419dc569a318a3b23877cdd68f67e5bdc804f024b WatchSource:0}: Error finding container f5961fc5f155e6ac60213a3419dc569a318a3b23877cdd68f67e5bdc804f024b: Status 404 returned error can't find the container with id f5961fc5f155e6ac60213a3419dc569a318a3b23877cdd68f67e5bdc804f024b Jan 28 15:14:56 crc kubenswrapper[4981]: I0128 15:14:56.045147 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-m9hjc"] Jan 28 15:14:56 crc kubenswrapper[4981]: W0128 15:14:56.051253 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod499b468f_8150_49ce_9ec6_964f94f1234d.slice/crio-f50dc681b6b75da27ac1b392baebd2cd3c450f6d10db4303ad69c3e974269d91 WatchSource:0}: Error finding container f50dc681b6b75da27ac1b392baebd2cd3c450f6d10db4303ad69c3e974269d91: Status 404 returned error can't find the container with id f50dc681b6b75da27ac1b392baebd2cd3c450f6d10db4303ad69c3e974269d91 Jan 28 15:14:56 crc kubenswrapper[4981]: I0128 15:14:56.743350 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-m9hjc" event={"ID":"499b468f-8150-49ce-9ec6-964f94f1234d","Type":"ContainerStarted","Data":"f50dc681b6b75da27ac1b392baebd2cd3c450f6d10db4303ad69c3e974269d91"} Jan 28 15:14:56 crc kubenswrapper[4981]: I0128 15:14:56.746784 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-rds8t" event={"ID":"275dc545-afa7-4d22-9c2e-bc41e21e187f","Type":"ContainerStarted","Data":"f5961fc5f155e6ac60213a3419dc569a318a3b23877cdd68f67e5bdc804f024b"} Jan 28 15:14:56 crc kubenswrapper[4981]: I0128 15:14:56.748061 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lzqwx" event={"ID":"531e0ce3-f8d2-423f-8934-5427dca677c8","Type":"ContainerStarted","Data":"e90141e695cc2dc8c5c2196abc5addf7d34f3044ba8e268a10138f33d50c0eee"} Jan 28 15:14:58 crc kubenswrapper[4981]: I0128 15:14:58.763203 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lzqwx" event={"ID":"531e0ce3-f8d2-423f-8934-5427dca677c8","Type":"ContainerStarted","Data":"af80d4769ec81b92ab70f5c4ada12ba54a37ad51a89e9aa67505f0b0c8cfe2d1"} Jan 28 15:14:58 crc kubenswrapper[4981]: I0128 15:14:58.777699 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-lzqwx" podStartSLOduration=1.358826103 podStartE2EDuration="3.777678353s" podCreationTimestamp="2026-01-28 15:14:55 +0000 UTC" firstStartedPulling="2026-01-28 15:14:55.799070653 +0000 UTC m=+707.251228894" 
lastFinishedPulling="2026-01-28 15:14:58.217922893 +0000 UTC m=+709.670081144" observedRunningTime="2026-01-28 15:14:58.774752295 +0000 UTC m=+710.226910536" watchObservedRunningTime="2026-01-28 15:14:58.777678353 +0000 UTC m=+710.229836594" Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.170720 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99"] Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.172204 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99" Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.176627 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.176651 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.178063 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99"] Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.276889 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2kz9\" (UniqueName: \"kubernetes.io/projected/2df18abc-5d9b-447b-997c-2e60e4f85bad-kube-api-access-z2kz9\") pod \"collect-profiles-29493555-x5v99\" (UID: \"2df18abc-5d9b-447b-997c-2e60e4f85bad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99" Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.277117 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2df18abc-5d9b-447b-997c-2e60e4f85bad-secret-volume\") pod \"collect-profiles-29493555-x5v99\" (UID: \"2df18abc-5d9b-447b-997c-2e60e4f85bad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99" Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.277245 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2df18abc-5d9b-447b-997c-2e60e4f85bad-config-volume\") pod \"collect-profiles-29493555-x5v99\" (UID: \"2df18abc-5d9b-447b-997c-2e60e4f85bad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99" Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.377925 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2df18abc-5d9b-447b-997c-2e60e4f85bad-secret-volume\") pod \"collect-profiles-29493555-x5v99\" (UID: \"2df18abc-5d9b-447b-997c-2e60e4f85bad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99" Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.377979 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2df18abc-5d9b-447b-997c-2e60e4f85bad-config-volume\") pod \"collect-profiles-29493555-x5v99\" (UID: \"2df18abc-5d9b-447b-997c-2e60e4f85bad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99" Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.378041 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-z2kz9\" (UniqueName: \"kubernetes.io/projected/2df18abc-5d9b-447b-997c-2e60e4f85bad-kube-api-access-z2kz9\") pod \"collect-profiles-29493555-x5v99\" (UID: \"2df18abc-5d9b-447b-997c-2e60e4f85bad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99" Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.379554 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2df18abc-5d9b-447b-997c-2e60e4f85bad-config-volume\") pod \"collect-profiles-29493555-x5v99\" (UID: \"2df18abc-5d9b-447b-997c-2e60e4f85bad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99" Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.384791 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2df18abc-5d9b-447b-997c-2e60e4f85bad-secret-volume\") pod \"collect-profiles-29493555-x5v99\" (UID: \"2df18abc-5d9b-447b-997c-2e60e4f85bad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99" Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.399421 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2kz9\" (UniqueName: \"kubernetes.io/projected/2df18abc-5d9b-447b-997c-2e60e4f85bad-kube-api-access-z2kz9\") pod \"collect-profiles-29493555-x5v99\" (UID: \"2df18abc-5d9b-447b-997c-2e60e4f85bad\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99" Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.536261 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99" Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.742831 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99"] Jan 28 15:15:00 crc kubenswrapper[4981]: W0128 15:15:00.747301 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2df18abc_5d9b_447b_997c_2e60e4f85bad.slice/crio-1ab51323a677e34fbbe009d3d0bc2e9937a448f9a07d062f24dbd53cc6985e75 WatchSource:0}: Error finding container 1ab51323a677e34fbbe009d3d0bc2e9937a448f9a07d062f24dbd53cc6985e75: Status 404 returned error can't find the container with id 1ab51323a677e34fbbe009d3d0bc2e9937a448f9a07d062f24dbd53cc6985e75 Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.776542 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-rds8t" event={"ID":"275dc545-afa7-4d22-9c2e-bc41e21e187f","Type":"ContainerStarted","Data":"7dcf883355cc272992df12798232d51bd853da87461c218fab90f63107ba4735"} Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.777455 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-m9hjc" event={"ID":"499b468f-8150-49ce-9ec6-964f94f1234d","Type":"ContainerStarted","Data":"4d0ba07e10d670e72bd98b49c8f0ac4aa48ed63c3943ff7af933e1801cacba8c"} Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.777880 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-m9hjc" Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.779005 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99" 
event={"ID":"2df18abc-5d9b-447b-997c-2e60e4f85bad","Type":"ContainerStarted","Data":"1ab51323a677e34fbbe009d3d0bc2e9937a448f9a07d062f24dbd53cc6985e75"} Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.793931 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-rds8t" podStartSLOduration=1.736946286 podStartE2EDuration="5.793909782s" podCreationTimestamp="2026-01-28 15:14:55 +0000 UTC" firstStartedPulling="2026-01-28 15:14:56.042414466 +0000 UTC m=+707.494572707" lastFinishedPulling="2026-01-28 15:15:00.099377952 +0000 UTC m=+711.551536203" observedRunningTime="2026-01-28 15:15:00.790835239 +0000 UTC m=+712.242993510" watchObservedRunningTime="2026-01-28 15:15:00.793909782 +0000 UTC m=+712.246068023" Jan 28 15:15:00 crc kubenswrapper[4981]: I0128 15:15:00.847610 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-m9hjc" podStartSLOduration=1.7962733690000001 podStartE2EDuration="5.847586543s" podCreationTimestamp="2026-01-28 15:14:55 +0000 UTC" firstStartedPulling="2026-01-28 15:14:56.054432729 +0000 UTC m=+707.506590970" lastFinishedPulling="2026-01-28 15:15:00.105745893 +0000 UTC m=+711.557904144" observedRunningTime="2026-01-28 15:15:00.846799352 +0000 UTC m=+712.298957593" watchObservedRunningTime="2026-01-28 15:15:00.847586543 +0000 UTC m=+712.299744784" Jan 28 15:15:01 crc kubenswrapper[4981]: I0128 15:15:01.788071 4981 generic.go:334] "Generic (PLEG): container finished" podID="2df18abc-5d9b-447b-997c-2e60e4f85bad" containerID="a648c4db1087e59e7c8821bce4e290ab7934c2d7f20949c2e830960e8beb5783" exitCode=0 Jan 28 15:15:01 crc kubenswrapper[4981]: I0128 15:15:01.788164 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99" event={"ID":"2df18abc-5d9b-447b-997c-2e60e4f85bad","Type":"ContainerDied","Data":"a648c4db1087e59e7c8821bce4e290ab7934c2d7f20949c2e830960e8beb5783"} Jan 28 15:15:03 crc kubenswrapper[4981]: I0128 15:15:03.077513 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99" Jan 28 15:15:03 crc kubenswrapper[4981]: I0128 15:15:03.209611 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2df18abc-5d9b-447b-997c-2e60e4f85bad-secret-volume\") pod \"2df18abc-5d9b-447b-997c-2e60e4f85bad\" (UID: \"2df18abc-5d9b-447b-997c-2e60e4f85bad\") " Jan 28 15:15:03 crc kubenswrapper[4981]: I0128 15:15:03.209690 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2kz9\" (UniqueName: \"kubernetes.io/projected/2df18abc-5d9b-447b-997c-2e60e4f85bad-kube-api-access-z2kz9\") pod \"2df18abc-5d9b-447b-997c-2e60e4f85bad\" (UID: \"2df18abc-5d9b-447b-997c-2e60e4f85bad\") " Jan 28 15:15:03 crc kubenswrapper[4981]: I0128 15:15:03.209744 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2df18abc-5d9b-447b-997c-2e60e4f85bad-config-volume\") pod \"2df18abc-5d9b-447b-997c-2e60e4f85bad\" (UID: \"2df18abc-5d9b-447b-997c-2e60e4f85bad\") " Jan 28 15:15:03 crc kubenswrapper[4981]: I0128 15:15:03.210623 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2df18abc-5d9b-447b-997c-2e60e4f85bad-config-volume" (OuterVolumeSpecName: "config-volume") pod "2df18abc-5d9b-447b-997c-2e60e4f85bad" (UID: "2df18abc-5d9b-447b-997c-2e60e4f85bad"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:15:03 crc kubenswrapper[4981]: I0128 15:15:03.210830 4981 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2df18abc-5d9b-447b-997c-2e60e4f85bad-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:03 crc kubenswrapper[4981]: I0128 15:15:03.216687 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2df18abc-5d9b-447b-997c-2e60e4f85bad-kube-api-access-z2kz9" (OuterVolumeSpecName: "kube-api-access-z2kz9") pod "2df18abc-5d9b-447b-997c-2e60e4f85bad" (UID: "2df18abc-5d9b-447b-997c-2e60e4f85bad"). InnerVolumeSpecName "kube-api-access-z2kz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:15:03 crc kubenswrapper[4981]: I0128 15:15:03.216781 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2df18abc-5d9b-447b-997c-2e60e4f85bad-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2df18abc-5d9b-447b-997c-2e60e4f85bad" (UID: "2df18abc-5d9b-447b-997c-2e60e4f85bad"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:15:03 crc kubenswrapper[4981]: I0128 15:15:03.311833 4981 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2df18abc-5d9b-447b-997c-2e60e4f85bad-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:03 crc kubenswrapper[4981]: I0128 15:15:03.311896 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2kz9\" (UniqueName: \"kubernetes.io/projected/2df18abc-5d9b-447b-997c-2e60e4f85bad-kube-api-access-z2kz9\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:03 crc kubenswrapper[4981]: I0128 15:15:03.800965 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99" event={"ID":"2df18abc-5d9b-447b-997c-2e60e4f85bad","Type":"ContainerDied","Data":"1ab51323a677e34fbbe009d3d0bc2e9937a448f9a07d062f24dbd53cc6985e75"} Jan 28 15:15:03 crc kubenswrapper[4981]: I0128 15:15:03.800998 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99" Jan 28 15:15:03 crc kubenswrapper[4981]: I0128 15:15:03.801244 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ab51323a677e34fbbe009d3d0bc2e9937a448f9a07d062f24dbd53cc6985e75" Jan 28 15:15:04 crc kubenswrapper[4981]: I0128 15:15:04.936984 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2ss7x"] Jan 28 15:15:04 crc kubenswrapper[4981]: I0128 15:15:04.937479 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovn-controller" containerID="cri-o://646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf" gracePeriod=30 Jan 28 15:15:04 crc kubenswrapper[4981]: I0128 15:15:04.937534 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="nbdb" containerID="cri-o://323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867" gracePeriod=30 Jan 28 15:15:04 crc kubenswrapper[4981]: I0128 15:15:04.937634 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="northd" containerID="cri-o://fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25" gracePeriod=30 Jan 28 15:15:04 crc kubenswrapper[4981]: I0128 15:15:04.937692 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c" gracePeriod=30 Jan 28 15:15:04 crc kubenswrapper[4981]: I0128 15:15:04.937735 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="kube-rbac-proxy-node" containerID="cri-o://e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211" gracePeriod=30 Jan 28 15:15:04 crc kubenswrapper[4981]: I0128 15:15:04.937775 4981 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovn-acl-logging" containerID="cri-o://dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60" gracePeriod=30 Jan 28 15:15:04 crc kubenswrapper[4981]: I0128 15:15:04.938165 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="sbdb" containerID="cri-o://99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9" gracePeriod=30 Jan 28 15:15:04 crc kubenswrapper[4981]: I0128 15:15:04.978767 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovnkube-controller" containerID="cri-o://bf005f62b2bbce81022a546c73d4104d001f145013a9720a31ce265c0c40b9ca" gracePeriod=30 Jan 28 15:15:05 crc kubenswrapper[4981]: E0128 15:15:05.397282 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9 is running failed: container process not found" containerID="99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 28 15:15:05 crc kubenswrapper[4981]: E0128 15:15:05.397342 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867 is running failed: container process not found" containerID="323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 28 15:15:05 crc kubenswrapper[4981]: E0128 15:15:05.397686 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9 is running failed: container process not found" containerID="99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Jan 28 15:15:05 crc kubenswrapper[4981]: E0128 15:15:05.397916 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867 is running failed: container process not found" containerID="323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Jan 28 15:15:05 crc kubenswrapper[4981]: E0128 15:15:05.397996 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9 is running failed: container process not found" containerID="99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
Jan 28 15:15:05 crc kubenswrapper[4981]: E0128 15:15:05.397996 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9 is running failed: container process not found" containerID="99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Jan 28 15:15:05 crc kubenswrapper[4981]: E0128 15:15:05.398037 4981 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="sbdb"
Jan 28 15:15:05 crc kubenswrapper[4981]: E0128 15:15:05.398141 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867 is running failed: container process not found" containerID="323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"]
Jan 28 15:15:05 crc kubenswrapper[4981]: E0128 15:15:05.398175 4981 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="nbdb"
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.561372 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-m9hjc"
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.822174 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lwvh4_3cd6b29e-682c-4aec-b039-70d6d75cbcbc/kube-multus/2.log"
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.823235 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lwvh4_3cd6b29e-682c-4aec-b039-70d6d75cbcbc/kube-multus/1.log"
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.823315 4981 generic.go:334] "Generic (PLEG): container finished" podID="3cd6b29e-682c-4aec-b039-70d6d75cbcbc" containerID="f303c909c2291ab319ea84a75c816fa5be8eb7515cc7e5cfd1c2bb7a8fd74c8b" exitCode=2
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.823417 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lwvh4" event={"ID":"3cd6b29e-682c-4aec-b039-70d6d75cbcbc","Type":"ContainerDied","Data":"f303c909c2291ab319ea84a75c816fa5be8eb7515cc7e5cfd1c2bb7a8fd74c8b"}
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.823484 4981 scope.go:117] "RemoveContainer" containerID="e787c9c633e01ce0e62e64cb5468c84dcf7452433437f827989301a9ef122368"
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.824296 4981 scope.go:117] "RemoveContainer" containerID="f303c909c2291ab319ea84a75c816fa5be8eb7515cc7e5cfd1c2bb7a8fd74c8b"
Jan 28 15:15:05 crc kubenswrapper[4981]: E0128 15:15:05.824707 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lwvh4_openshift-multus(3cd6b29e-682c-4aec-b039-70d6d75cbcbc)\"" pod="openshift-multus/multus-lwvh4" podUID="3cd6b29e-682c-4aec-b039-70d6d75cbcbc"
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.833729 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovnkube-controller/3.log"
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.842657 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovn-acl-logging/0.log"
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.843420 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovn-controller/0.log"
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.843922 4981 generic.go:334] "Generic (PLEG): container finished" podID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerID="bf005f62b2bbce81022a546c73d4104d001f145013a9720a31ce265c0c40b9ca" exitCode=0
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.843941 4981 generic.go:334] "Generic (PLEG): container finished" podID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerID="99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9" exitCode=0
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.843948 4981 generic.go:334] "Generic (PLEG): container finished" podID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerID="323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867" exitCode=0
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.843956 4981 generic.go:334] "Generic (PLEG): container finished" podID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerID="fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25" exitCode=0
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.843963 4981 generic.go:334] "Generic (PLEG): container finished" podID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerID="4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c" exitCode=0
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.843971 4981 generic.go:334] "Generic (PLEG): container finished" podID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerID="e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211" exitCode=0
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.843977 4981 generic.go:334] "Generic (PLEG): container finished" podID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerID="dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60" exitCode=143
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.843984 4981 generic.go:334] "Generic (PLEG): container finished" podID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerID="646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf" exitCode=143
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.844004 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerDied","Data":"bf005f62b2bbce81022a546c73d4104d001f145013a9720a31ce265c0c40b9ca"}
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.844029 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerDied","Data":"99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9"}
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.844039 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerDied","Data":"323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867"}
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.844047 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerDied","Data":"fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25"}
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.844056 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerDied","Data":"4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c"}
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.844065 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerDied","Data":"e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211"}
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.844073 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerDied","Data":"dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60"}
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.844081 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerDied","Data":"646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf"}
Jan 28 15:15:05 crc kubenswrapper[4981]: I0128 15:15:05.946869 4981 scope.go:117] "RemoveContainer" containerID="8963eef891d43000aede79bee50cee3b058c3195ab3b2ba45f083ef0a156b46d"
Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.030457 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovn-acl-logging/0.log"
Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.030850 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovn-controller/0.log"
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.089670 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-m964q"] Jan 28 15:15:06 crc kubenswrapper[4981]: E0128 15:15:06.089886 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2df18abc-5d9b-447b-997c-2e60e4f85bad" containerName="collect-profiles" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.089898 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="2df18abc-5d9b-447b-997c-2e60e4f85bad" containerName="collect-profiles" Jan 28 15:15:06 crc kubenswrapper[4981]: E0128 15:15:06.089908 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovn-acl-logging" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.089913 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovn-acl-logging" Jan 28 15:15:06 crc kubenswrapper[4981]: E0128 15:15:06.089924 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovnkube-controller" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.089930 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovnkube-controller" Jan 28 15:15:06 crc kubenswrapper[4981]: E0128 15:15:06.089938 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovnkube-controller" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.089943 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovnkube-controller" Jan 28 15:15:06 crc kubenswrapper[4981]: E0128 15:15:06.089951 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="kubecfg-setup" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.089956 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="kubecfg-setup" Jan 28 15:15:06 crc kubenswrapper[4981]: E0128 15:15:06.089963 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="northd" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.089968 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="northd" Jan 28 15:15:06 crc kubenswrapper[4981]: E0128 15:15:06.089979 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovnkube-controller" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.089984 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovnkube-controller" Jan 28 15:15:06 crc kubenswrapper[4981]: E0128 15:15:06.089989 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="kube-rbac-proxy-node" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.089994 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="kube-rbac-proxy-node" Jan 28 15:15:06 crc kubenswrapper[4981]: E0128 15:15:06.090001 4981 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.090008 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 15:15:06 crc kubenswrapper[4981]: E0128 15:15:06.090015 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="sbdb" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.090023 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="sbdb" Jan 28 15:15:06 crc kubenswrapper[4981]: E0128 15:15:06.090033 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovn-controller" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.090039 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovn-controller" Jan 28 15:15:06 crc kubenswrapper[4981]: E0128 15:15:06.090049 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="nbdb" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.090055 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="nbdb" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.090135 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="sbdb" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.090143 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="northd" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.090148 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovnkube-controller" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.090159 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="kube-rbac-proxy-node" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.090166 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="nbdb" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.090173 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovn-controller" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.090181 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.090241 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovnkube-controller" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.090247 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovn-acl-logging" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.090255 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovnkube-controller" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.090261 4981 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovnkube-controller" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.090268 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="2df18abc-5d9b-447b-997c-2e60e4f85bad" containerName="collect-profiles" Jan 28 15:15:06 crc kubenswrapper[4981]: E0128 15:15:06.090350 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovnkube-controller" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.090357 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovnkube-controller" Jan 28 15:15:06 crc kubenswrapper[4981]: E0128 15:15:06.090367 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovnkube-controller" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.090373 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovnkube-controller" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.090461 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" containerName="ovnkube-controller" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.092046 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.152922 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-run-openvswitch\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.153403 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-run-systemd\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.153496 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cbdbd481-8604-433f-823e-d77a8b8517a8-ovn-node-metrics-cert\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.153060 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.153622 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-etc-openvswitch\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.153688 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-run-ovn\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.153788 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-var-lib-openvswitch\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.153687 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.153708 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.153957 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-kubelet\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.153958 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.153998 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154021 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-slash\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154080 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cbdbd481-8604-433f-823e-d77a8b8517a8-ovnkube-script-lib\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154098 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-node-log\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154127 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-log-socket\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154148 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154174 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fnr8\" (UniqueName: \"kubernetes.io/projected/cbdbd481-8604-433f-823e-d77a8b8517a8-kube-api-access-2fnr8\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154220 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-node-log" (OuterVolumeSpecName: "node-log") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154242 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154273 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-log-socket" (OuterVolumeSpecName: "log-socket") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154307 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154289 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-run-netns\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154386 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cbdbd481-8604-433f-823e-d77a8b8517a8-ovnkube-config\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154411 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-cni-bin\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154434 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-cni-netd\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154451 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cbdbd481-8604-433f-823e-d77a8b8517a8-env-overrides\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154466 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-run-ovn-kubernetes\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154497 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154501 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-systemd-units\") pod \"cbdbd481-8604-433f-823e-d77a8b8517a8\" (UID: \"cbdbd481-8604-433f-823e-d77a8b8517a8\") " Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154525 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154572 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154627 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154770 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-slash" (OuterVolumeSpecName: "host-slash") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154615 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbdbd481-8604-433f-823e-d77a8b8517a8-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154783 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbdbd481-8604-433f-823e-d77a8b8517a8-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154847 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbdbd481-8604-433f-823e-d77a8b8517a8-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.154811 4981 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.155079 4981 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.155132 4981 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.155201 4981 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.155273 4981 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.155327 4981 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.155377 4981 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.155434 4981 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.155484 4981 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.155532 4981 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.155579 4981 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-node-log\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.155627 4981 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-log-socket\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.155674 4981 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-var-lib-cni-networks-ovn-kubernetes\") on node 
\"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.159438 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbdbd481-8604-433f-823e-d77a8b8517a8-kube-api-access-2fnr8" (OuterVolumeSpecName: "kube-api-access-2fnr8") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "kube-api-access-2fnr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.159633 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cbdbd481-8604-433f-823e-d77a8b8517a8-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.168211 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "cbdbd481-8604-433f-823e-d77a8b8517a8" (UID: "cbdbd481-8604-433f-823e-d77a8b8517a8"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.257175 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-run-netns\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.257298 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-etc-openvswitch\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.257360 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-systemd-units\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.257717 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-run-openvswitch\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.257782 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/228f212c-639d-4982-88fe-a9af98c28918-env-overrides\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.257817 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.257904 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-log-socket\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.257982 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/228f212c-639d-4982-88fe-a9af98c28918-ovnkube-config\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.258040 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-node-log\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.258065 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-slash\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.258113 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-run-ovn\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.258276 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-cni-bin\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.258334 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/228f212c-639d-4982-88fe-a9af98c28918-ovnkube-script-lib\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.258390 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-run-ovn-kubernetes\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 
15:15:06.258433 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-var-lib-openvswitch\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.258481 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-kubelet\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.258528 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/228f212c-639d-4982-88fe-a9af98c28918-ovn-node-metrics-cert\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.258583 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-cni-netd\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.258631 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsbj6\" (UniqueName: \"kubernetes.io/projected/228f212c-639d-4982-88fe-a9af98c28918-kube-api-access-lsbj6\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.258686 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-run-systemd\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.258843 4981 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cbdbd481-8604-433f-823e-d77a8b8517a8-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.258880 4981 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.258909 4981 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cbdbd481-8604-433f-823e-d77a8b8517a8-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.258938 4981 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/cbdbd481-8604-433f-823e-d77a8b8517a8-host-slash\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.258964 4981 reconciler_common.go:293] 
"Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/cbdbd481-8604-433f-823e-d77a8b8517a8-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.258989 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fnr8\" (UniqueName: \"kubernetes.io/projected/cbdbd481-8604-433f-823e-d77a8b8517a8-kube-api-access-2fnr8\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.259014 4981 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cbdbd481-8604-433f-823e-d77a8b8517a8-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.360441 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/228f212c-639d-4982-88fe-a9af98c28918-ovn-node-metrics-cert\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.361479 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-cni-netd\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.361535 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsbj6\" (UniqueName: \"kubernetes.io/projected/228f212c-639d-4982-88fe-a9af98c28918-kube-api-access-lsbj6\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.361590 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-run-systemd\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.361652 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-cni-netd\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.361748 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-run-systemd\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.361752 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-run-netns\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.361805 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-run-netns\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.361842 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-etc-openvswitch\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.361896 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-systemd-units\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.361947 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-run-openvswitch\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.361960 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-etc-openvswitch\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.361991 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/228f212c-639d-4982-88fe-a9af98c28918-env-overrides\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.362019 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-systemd-units\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.362031 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-run-openvswitch\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.362089 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.362136 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.362149 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-log-socket\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.362247 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/228f212c-639d-4982-88fe-a9af98c28918-ovnkube-config\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.362258 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-log-socket\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.362442 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-node-log\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.362550 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-node-log\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.362637 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-slash\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.362829 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-slash\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.363013 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-run-ovn\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.363057 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/228f212c-639d-4982-88fe-a9af98c28918-env-overrides\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.363075 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-run-ovn\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.363179 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-cni-bin\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.363294 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/228f212c-639d-4982-88fe-a9af98c28918-ovnkube-script-lib\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.363337 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-run-ovn-kubernetes\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.363349 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/228f212c-639d-4982-88fe-a9af98c28918-ovnkube-config\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.363376 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-var-lib-openvswitch\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.363420 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-kubelet\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.363464 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-run-ovn-kubernetes\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.363497 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-var-lib-openvswitch\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc 
kubenswrapper[4981]: I0128 15:15:06.363540 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-kubelet\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.364653 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/228f212c-639d-4982-88fe-a9af98c28918-ovnkube-script-lib\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.365000 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/228f212c-639d-4982-88fe-a9af98c28918-ovn-node-metrics-cert\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.365110 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/228f212c-639d-4982-88fe-a9af98c28918-host-cni-bin\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.390866 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsbj6\" (UniqueName: \"kubernetes.io/projected/228f212c-639d-4982-88fe-a9af98c28918-kube-api-access-lsbj6\") pod \"ovnkube-node-m964q\" (UID: \"228f212c-639d-4982-88fe-a9af98c28918\") " pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.405973 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.854166 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lwvh4_3cd6b29e-682c-4aec-b039-70d6d75cbcbc/kube-multus/2.log" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.860816 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovn-acl-logging/0.log" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.862225 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-2ss7x_cbdbd481-8604-433f-823e-d77a8b8517a8/ovn-controller/0.log" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.862789 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" event={"ID":"cbdbd481-8604-433f-823e-d77a8b8517a8","Type":"ContainerDied","Data":"7a2ef33b9f1aa730ca2c7976800044be34eb3600e9b7b5e3edb61c768fb34bbc"} Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.862833 4981 scope.go:117] "RemoveContainer" containerID="bf005f62b2bbce81022a546c73d4104d001f145013a9720a31ce265c0c40b9ca" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.862894 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-2ss7x" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.864797 4981 generic.go:334] "Generic (PLEG): container finished" podID="228f212c-639d-4982-88fe-a9af98c28918" containerID="45374c5a65b094b9a8a57eafcbcc21bf9d7d7fc4a4623a3f4c3b11f86796dc29" exitCode=0 Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.864830 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m964q" event={"ID":"228f212c-639d-4982-88fe-a9af98c28918","Type":"ContainerDied","Data":"45374c5a65b094b9a8a57eafcbcc21bf9d7d7fc4a4623a3f4c3b11f86796dc29"} Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.864849 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m964q" event={"ID":"228f212c-639d-4982-88fe-a9af98c28918","Type":"ContainerStarted","Data":"caea1d147f8f8450d235cb8ea7aa39035a7264fd2cc8718fddb4958c2292e490"} Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.884447 4981 scope.go:117] "RemoveContainer" containerID="99c941d73daed176f9eadfe383a20608f5aebd1af5ccbf62bd7a6d07e85837e9" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.909395 4981 scope.go:117] "RemoveContainer" containerID="323b6bd4280f3e3a6e0fbf878b6879130b624516211021d4fbb00c482daa9867" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.935384 4981 scope.go:117] "RemoveContainer" containerID="fbcc1ee4c3a0500e0de3ed07e02139a27cafb4af5206249bafc21f76feec6b25" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.979109 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2ss7x"] Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.979472 4981 scope.go:117] "RemoveContainer" containerID="4cc89a36de7fd62eaa2e8663ca1e586a3d51a162d7d01ac32b3f6dbf71ce460c" Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.984828 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-2ss7x"] Jan 28 15:15:06 crc kubenswrapper[4981]: I0128 15:15:06.993301 4981 scope.go:117] "RemoveContainer" containerID="e5101929d02b45c28b5b2a6b4edd9a500afeced89ba25a3b9c82964f4a9bf211" Jan 28 15:15:07 crc kubenswrapper[4981]: I0128 15:15:07.007154 4981 scope.go:117] "RemoveContainer" containerID="dfe0743973a5fbb7422662228b113778a2889185867baf45bd0a2ed7c39a4c60" Jan 28 15:15:07 crc kubenswrapper[4981]: I0128 15:15:07.019283 4981 scope.go:117] "RemoveContainer" containerID="646dd80598a893b4b906d3f3d871f90ef4a523c0742f5e5b4da0e0548f5dadbf" Jan 28 15:15:07 crc kubenswrapper[4981]: I0128 15:15:07.035458 4981 scope.go:117] "RemoveContainer" containerID="832fc2677761ec2a4850a338e790caaaf8b949f5fb9b2dfab5b05553e513077c" Jan 28 15:15:07 crc kubenswrapper[4981]: I0128 15:15:07.325107 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbdbd481-8604-433f-823e-d77a8b8517a8" path="/var/lib/kubelet/pods/cbdbd481-8604-433f-823e-d77a8b8517a8/volumes" Jan 28 15:15:07 crc kubenswrapper[4981]: I0128 15:15:07.874772 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m964q" event={"ID":"228f212c-639d-4982-88fe-a9af98c28918","Type":"ContainerStarted","Data":"2a2f507de9270a28865fbef22a0ab58a7dc632425e00ca21937fa500927ebfb6"} Jan 28 15:15:07 crc kubenswrapper[4981]: I0128 15:15:07.875091 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m964q" 
event={"ID":"228f212c-639d-4982-88fe-a9af98c28918","Type":"ContainerStarted","Data":"a8d20f42922efd258626f616bbbe407a431763995f35ef142f93b9e3bfd0c83b"} Jan 28 15:15:07 crc kubenswrapper[4981]: I0128 15:15:07.875105 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m964q" event={"ID":"228f212c-639d-4982-88fe-a9af98c28918","Type":"ContainerStarted","Data":"21c1ca9dcac007b99f4996585182eb43c0895f4a360d91b790cbac5a1d3f6172"} Jan 28 15:15:08 crc kubenswrapper[4981]: I0128 15:15:08.899832 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m964q" event={"ID":"228f212c-639d-4982-88fe-a9af98c28918","Type":"ContainerStarted","Data":"16b695919217daf110568398407f5081ec0250e5d5bed2b7d8e2bdcd0339138b"} Jan 28 15:15:08 crc kubenswrapper[4981]: I0128 15:15:08.899883 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m964q" event={"ID":"228f212c-639d-4982-88fe-a9af98c28918","Type":"ContainerStarted","Data":"16ba213c9787a08ed8f062a0e9e7006ab9d260d34f765e2ed753bc871e3b4da2"} Jan 28 15:15:08 crc kubenswrapper[4981]: I0128 15:15:08.899896 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m964q" event={"ID":"228f212c-639d-4982-88fe-a9af98c28918","Type":"ContainerStarted","Data":"e9a2375de786d4af23eb2d40e25f221283c064146856cc47e1c4ad2153a20ab7"} Jan 28 15:15:10 crc kubenswrapper[4981]: I0128 15:15:10.914142 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m964q" event={"ID":"228f212c-639d-4982-88fe-a9af98c28918","Type":"ContainerStarted","Data":"a13219d5f070fc5962f920d31037642e175cef03b75d92d46545321d07acfde7"} Jan 28 15:15:13 crc kubenswrapper[4981]: I0128 15:15:13.936343 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-m964q" event={"ID":"228f212c-639d-4982-88fe-a9af98c28918","Type":"ContainerStarted","Data":"7579ecabda6ba10e45ec94bb29f57aff9473bdb9ac54fc5e90239197d44a9511"} Jan 28 15:15:13 crc kubenswrapper[4981]: I0128 15:15:13.936724 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:13 crc kubenswrapper[4981]: I0128 15:15:13.990315 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-m964q" podStartSLOduration=7.990290648 podStartE2EDuration="7.990290648s" podCreationTimestamp="2026-01-28 15:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:15:13.986572292 +0000 UTC m=+725.438730543" watchObservedRunningTime="2026-01-28 15:15:13.990290648 +0000 UTC m=+725.442448889" Jan 28 15:15:13 crc kubenswrapper[4981]: I0128 15:15:13.991906 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:14 crc kubenswrapper[4981]: I0128 15:15:14.943171 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:14 crc kubenswrapper[4981]: I0128 15:15:14.943242 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:15 crc kubenswrapper[4981]: I0128 15:15:15.007556 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:19 crc kubenswrapper[4981]: I0128 15:15:19.320501 4981 scope.go:117] "RemoveContainer" containerID="f303c909c2291ab319ea84a75c816fa5be8eb7515cc7e5cfd1c2bb7a8fd74c8b" Jan 28 15:15:19 crc kubenswrapper[4981]: E0128 15:15:19.321005 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lwvh4_openshift-multus(3cd6b29e-682c-4aec-b039-70d6d75cbcbc)\"" pod="openshift-multus/multus-lwvh4" podUID="3cd6b29e-682c-4aec-b039-70d6d75cbcbc" Jan 28 15:15:34 crc kubenswrapper[4981]: I0128 15:15:34.319020 4981 scope.go:117] "RemoveContainer" containerID="f303c909c2291ab319ea84a75c816fa5be8eb7515cc7e5cfd1c2bb7a8fd74c8b" Jan 28 15:15:35 crc kubenswrapper[4981]: I0128 15:15:35.108157 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lwvh4_3cd6b29e-682c-4aec-b039-70d6d75cbcbc/kube-multus/2.log" Jan 28 15:15:35 crc kubenswrapper[4981]: I0128 15:15:35.108488 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lwvh4" event={"ID":"3cd6b29e-682c-4aec-b039-70d6d75cbcbc","Type":"ContainerStarted","Data":"0a1011b5885273385f119aedb7ceb581d55cd4b82bb60ecc76dcf4350d4b8111"} Jan 28 15:15:36 crc kubenswrapper[4981]: I0128 15:15:36.433429 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-m964q" Jan 28 15:15:42 crc kubenswrapper[4981]: I0128 15:15:42.922757 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl"] Jan 28 15:15:42 crc kubenswrapper[4981]: I0128 15:15:42.924981 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl" Jan 28 15:15:42 crc kubenswrapper[4981]: I0128 15:15:42.926876 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 15:15:42 crc kubenswrapper[4981]: I0128 15:15:42.936519 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl"] Jan 28 15:15:43 crc kubenswrapper[4981]: I0128 15:15:43.077428 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts22r\" (UniqueName: \"kubernetes.io/projected/35228b73-1ad1-4fa7-9470-ba0f42f71c3f-kube-api-access-ts22r\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl\" (UID: \"35228b73-1ad1-4fa7-9470-ba0f42f71c3f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl" Jan 28 15:15:43 crc kubenswrapper[4981]: I0128 15:15:43.077949 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/35228b73-1ad1-4fa7-9470-ba0f42f71c3f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl\" (UID: \"35228b73-1ad1-4fa7-9470-ba0f42f71c3f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl" Jan 28 15:15:43 crc kubenswrapper[4981]: I0128 15:15:43.078396 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/35228b73-1ad1-4fa7-9470-ba0f42f71c3f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl\" (UID: \"35228b73-1ad1-4fa7-9470-ba0f42f71c3f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl" Jan 28 15:15:43 crc kubenswrapper[4981]: I0128 15:15:43.179855 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts22r\" (UniqueName: \"kubernetes.io/projected/35228b73-1ad1-4fa7-9470-ba0f42f71c3f-kube-api-access-ts22r\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl\" (UID: \"35228b73-1ad1-4fa7-9470-ba0f42f71c3f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl" Jan 28 15:15:43 crc kubenswrapper[4981]: I0128 15:15:43.180046 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/35228b73-1ad1-4fa7-9470-ba0f42f71c3f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl\" (UID: \"35228b73-1ad1-4fa7-9470-ba0f42f71c3f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl" Jan 28 15:15:43 crc kubenswrapper[4981]: I0128 15:15:43.180226 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/35228b73-1ad1-4fa7-9470-ba0f42f71c3f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl\" (UID: \"35228b73-1ad1-4fa7-9470-ba0f42f71c3f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl" Jan 28 15:15:43 crc kubenswrapper[4981]: I0128 15:15:43.181376 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/35228b73-1ad1-4fa7-9470-ba0f42f71c3f-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl\" (UID: \"35228b73-1ad1-4fa7-9470-ba0f42f71c3f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl" Jan 28 15:15:43 crc kubenswrapper[4981]: I0128 15:15:43.181385 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/35228b73-1ad1-4fa7-9470-ba0f42f71c3f-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl\" (UID: \"35228b73-1ad1-4fa7-9470-ba0f42f71c3f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl" Jan 28 15:15:43 crc kubenswrapper[4981]: I0128 15:15:43.211582 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts22r\" (UniqueName: \"kubernetes.io/projected/35228b73-1ad1-4fa7-9470-ba0f42f71c3f-kube-api-access-ts22r\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl\" (UID: \"35228b73-1ad1-4fa7-9470-ba0f42f71c3f\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl" Jan 28 15:15:43 crc kubenswrapper[4981]: I0128 15:15:43.247637 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl" Jan 28 15:15:43 crc kubenswrapper[4981]: I0128 15:15:43.472786 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl"] Jan 28 15:15:44 crc kubenswrapper[4981]: I0128 15:15:44.168564 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl" event={"ID":"35228b73-1ad1-4fa7-9470-ba0f42f71c3f","Type":"ContainerStarted","Data":"66d66ff146d542d6cf14465d7d0358416aa77b1de3e0ce324ee8125d48743116"} Jan 28 15:15:44 crc kubenswrapper[4981]: I0128 15:15:44.170402 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl" event={"ID":"35228b73-1ad1-4fa7-9470-ba0f42f71c3f","Type":"ContainerStarted","Data":"5798c2a862d6657b0c8bd2c3dc150f78587cd90a94a0d063a155ef435c03f955"} Jan 28 15:15:45 crc kubenswrapper[4981]: I0128 15:15:45.179026 4981 generic.go:334] "Generic (PLEG): container finished" podID="35228b73-1ad1-4fa7-9470-ba0f42f71c3f" containerID="66d66ff146d542d6cf14465d7d0358416aa77b1de3e0ce324ee8125d48743116" exitCode=0 Jan 28 15:15:45 crc kubenswrapper[4981]: I0128 15:15:45.179149 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl" event={"ID":"35228b73-1ad1-4fa7-9470-ba0f42f71c3f","Type":"ContainerDied","Data":"66d66ff146d542d6cf14465d7d0358416aa77b1de3e0ce324ee8125d48743116"} Jan 28 15:15:50 crc kubenswrapper[4981]: I0128 15:15:50.217987 4981 generic.go:334] "Generic (PLEG): container finished" podID="35228b73-1ad1-4fa7-9470-ba0f42f71c3f" containerID="a77e1ab4e66377c9474be74658d7a76f5a5228b8b36edfabdba5e1b9327d6895" exitCode=0 Jan 28 15:15:50 crc kubenswrapper[4981]: I0128 15:15:50.218229 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl" 
event={"ID":"35228b73-1ad1-4fa7-9470-ba0f42f71c3f","Type":"ContainerDied","Data":"a77e1ab4e66377c9474be74658d7a76f5a5228b8b36edfabdba5e1b9327d6895"} Jan 28 15:15:51 crc kubenswrapper[4981]: I0128 15:15:51.229929 4981 generic.go:334] "Generic (PLEG): container finished" podID="35228b73-1ad1-4fa7-9470-ba0f42f71c3f" containerID="5f3815abf408a81950fda8a3389c4c1edf8e95d0e62407ebe8fef1e3737ada16" exitCode=0 Jan 28 15:15:51 crc kubenswrapper[4981]: I0128 15:15:51.230033 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl" event={"ID":"35228b73-1ad1-4fa7-9470-ba0f42f71c3f","Type":"ContainerDied","Data":"5f3815abf408a81950fda8a3389c4c1edf8e95d0e62407ebe8fef1e3737ada16"} Jan 28 15:15:51 crc kubenswrapper[4981]: I0128 15:15:51.487271 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mhc2x"] Jan 28 15:15:51 crc kubenswrapper[4981]: I0128 15:15:51.490084 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mhc2x" Jan 28 15:15:51 crc kubenswrapper[4981]: I0128 15:15:51.508922 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mhc2x"] Jan 28 15:15:51 crc kubenswrapper[4981]: I0128 15:15:51.605125 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4619cd2e-105f-434b-a0a9-085eef609c9f-catalog-content\") pod \"redhat-operators-mhc2x\" (UID: \"4619cd2e-105f-434b-a0a9-085eef609c9f\") " pod="openshift-marketplace/redhat-operators-mhc2x" Jan 28 15:15:51 crc kubenswrapper[4981]: I0128 15:15:51.605246 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbtgv\" (UniqueName: \"kubernetes.io/projected/4619cd2e-105f-434b-a0a9-085eef609c9f-kube-api-access-dbtgv\") pod \"redhat-operators-mhc2x\" (UID: \"4619cd2e-105f-434b-a0a9-085eef609c9f\") " pod="openshift-marketplace/redhat-operators-mhc2x" Jan 28 15:15:51 crc kubenswrapper[4981]: I0128 15:15:51.605275 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4619cd2e-105f-434b-a0a9-085eef609c9f-utilities\") pod \"redhat-operators-mhc2x\" (UID: \"4619cd2e-105f-434b-a0a9-085eef609c9f\") " pod="openshift-marketplace/redhat-operators-mhc2x" Jan 28 15:15:51 crc kubenswrapper[4981]: I0128 15:15:51.706999 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4619cd2e-105f-434b-a0a9-085eef609c9f-utilities\") pod \"redhat-operators-mhc2x\" (UID: \"4619cd2e-105f-434b-a0a9-085eef609c9f\") " pod="openshift-marketplace/redhat-operators-mhc2x" Jan 28 15:15:51 crc kubenswrapper[4981]: I0128 15:15:51.707102 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4619cd2e-105f-434b-a0a9-085eef609c9f-catalog-content\") pod \"redhat-operators-mhc2x\" (UID: \"4619cd2e-105f-434b-a0a9-085eef609c9f\") " pod="openshift-marketplace/redhat-operators-mhc2x" Jan 28 15:15:51 crc kubenswrapper[4981]: I0128 15:15:51.707184 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbtgv\" (UniqueName: 
\"kubernetes.io/projected/4619cd2e-105f-434b-a0a9-085eef609c9f-kube-api-access-dbtgv\") pod \"redhat-operators-mhc2x\" (UID: \"4619cd2e-105f-434b-a0a9-085eef609c9f\") " pod="openshift-marketplace/redhat-operators-mhc2x" Jan 28 15:15:51 crc kubenswrapper[4981]: I0128 15:15:51.707727 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4619cd2e-105f-434b-a0a9-085eef609c9f-utilities\") pod \"redhat-operators-mhc2x\" (UID: \"4619cd2e-105f-434b-a0a9-085eef609c9f\") " pod="openshift-marketplace/redhat-operators-mhc2x" Jan 28 15:15:51 crc kubenswrapper[4981]: I0128 15:15:51.707845 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4619cd2e-105f-434b-a0a9-085eef609c9f-catalog-content\") pod \"redhat-operators-mhc2x\" (UID: \"4619cd2e-105f-434b-a0a9-085eef609c9f\") " pod="openshift-marketplace/redhat-operators-mhc2x" Jan 28 15:15:51 crc kubenswrapper[4981]: I0128 15:15:51.733428 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbtgv\" (UniqueName: \"kubernetes.io/projected/4619cd2e-105f-434b-a0a9-085eef609c9f-kube-api-access-dbtgv\") pod \"redhat-operators-mhc2x\" (UID: \"4619cd2e-105f-434b-a0a9-085eef609c9f\") " pod="openshift-marketplace/redhat-operators-mhc2x" Jan 28 15:15:51 crc kubenswrapper[4981]: I0128 15:15:51.818611 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mhc2x" Jan 28 15:15:52 crc kubenswrapper[4981]: I0128 15:15:52.291808 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mhc2x"] Jan 28 15:15:52 crc kubenswrapper[4981]: I0128 15:15:52.533647 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl" Jan 28 15:15:52 crc kubenswrapper[4981]: I0128 15:15:52.721748 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/35228b73-1ad1-4fa7-9470-ba0f42f71c3f-bundle\") pod \"35228b73-1ad1-4fa7-9470-ba0f42f71c3f\" (UID: \"35228b73-1ad1-4fa7-9470-ba0f42f71c3f\") " Jan 28 15:15:52 crc kubenswrapper[4981]: I0128 15:15:52.721862 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/35228b73-1ad1-4fa7-9470-ba0f42f71c3f-util\") pod \"35228b73-1ad1-4fa7-9470-ba0f42f71c3f\" (UID: \"35228b73-1ad1-4fa7-9470-ba0f42f71c3f\") " Jan 28 15:15:52 crc kubenswrapper[4981]: I0128 15:15:52.721924 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts22r\" (UniqueName: \"kubernetes.io/projected/35228b73-1ad1-4fa7-9470-ba0f42f71c3f-kube-api-access-ts22r\") pod \"35228b73-1ad1-4fa7-9470-ba0f42f71c3f\" (UID: \"35228b73-1ad1-4fa7-9470-ba0f42f71c3f\") " Jan 28 15:15:52 crc kubenswrapper[4981]: I0128 15:15:52.722531 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35228b73-1ad1-4fa7-9470-ba0f42f71c3f-bundle" (OuterVolumeSpecName: "bundle") pod "35228b73-1ad1-4fa7-9470-ba0f42f71c3f" (UID: "35228b73-1ad1-4fa7-9470-ba0f42f71c3f"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:15:52 crc kubenswrapper[4981]: I0128 15:15:52.728451 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35228b73-1ad1-4fa7-9470-ba0f42f71c3f-kube-api-access-ts22r" (OuterVolumeSpecName: "kube-api-access-ts22r") pod "35228b73-1ad1-4fa7-9470-ba0f42f71c3f" (UID: "35228b73-1ad1-4fa7-9470-ba0f42f71c3f"). InnerVolumeSpecName "kube-api-access-ts22r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:15:52 crc kubenswrapper[4981]: I0128 15:15:52.736536 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35228b73-1ad1-4fa7-9470-ba0f42f71c3f-util" (OuterVolumeSpecName: "util") pod "35228b73-1ad1-4fa7-9470-ba0f42f71c3f" (UID: "35228b73-1ad1-4fa7-9470-ba0f42f71c3f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:15:52 crc kubenswrapper[4981]: I0128 15:15:52.823099 4981 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/35228b73-1ad1-4fa7-9470-ba0f42f71c3f-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:52 crc kubenswrapper[4981]: I0128 15:15:52.823137 4981 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/35228b73-1ad1-4fa7-9470-ba0f42f71c3f-util\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:52 crc kubenswrapper[4981]: I0128 15:15:52.823159 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts22r\" (UniqueName: \"kubernetes.io/projected/35228b73-1ad1-4fa7-9470-ba0f42f71c3f-kube-api-access-ts22r\") on node \"crc\" DevicePath \"\"" Jan 28 15:15:53 crc kubenswrapper[4981]: I0128 15:15:53.246230 4981 generic.go:334] "Generic (PLEG): container finished" podID="4619cd2e-105f-434b-a0a9-085eef609c9f" containerID="f1454339e50b20d41a5724f409ff2c0ffacb3d8c22765c9f146d6f5fd189e9da" exitCode=0 Jan 28 15:15:53 crc kubenswrapper[4981]: I0128 15:15:53.246302 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhc2x" event={"ID":"4619cd2e-105f-434b-a0a9-085eef609c9f","Type":"ContainerDied","Data":"f1454339e50b20d41a5724f409ff2c0ffacb3d8c22765c9f146d6f5fd189e9da"} Jan 28 15:15:53 crc kubenswrapper[4981]: I0128 15:15:53.246362 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhc2x" event={"ID":"4619cd2e-105f-434b-a0a9-085eef609c9f","Type":"ContainerStarted","Data":"01e54184a3505415085be78d793d673a504f14fe393f12f50306a8ce241d331f"} Jan 28 15:15:53 crc kubenswrapper[4981]: I0128 15:15:53.252160 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl" event={"ID":"35228b73-1ad1-4fa7-9470-ba0f42f71c3f","Type":"ContainerDied","Data":"5798c2a862d6657b0c8bd2c3dc150f78587cd90a94a0d063a155ef435c03f955"} Jan 28 15:15:53 crc kubenswrapper[4981]: I0128 15:15:53.252250 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5798c2a862d6657b0c8bd2c3dc150f78587cd90a94a0d063a155ef435c03f955" Jan 28 15:15:53 crc kubenswrapper[4981]: I0128 15:15:53.252376 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl" Jan 28 15:15:54 crc kubenswrapper[4981]: I0128 15:15:54.278317 4981 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 15:15:55 crc kubenswrapper[4981]: I0128 15:15:55.268548 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhc2x" event={"ID":"4619cd2e-105f-434b-a0a9-085eef609c9f","Type":"ContainerStarted","Data":"50a81a27d2922082a70c8212fbf27d982f62edf3bc2d277cff5d780ee361354e"} Jan 28 15:15:56 crc kubenswrapper[4981]: I0128 15:15:56.279394 4981 generic.go:334] "Generic (PLEG): container finished" podID="4619cd2e-105f-434b-a0a9-085eef609c9f" containerID="50a81a27d2922082a70c8212fbf27d982f62edf3bc2d277cff5d780ee361354e" exitCode=0 Jan 28 15:15:56 crc kubenswrapper[4981]: I0128 15:15:56.279467 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhc2x" event={"ID":"4619cd2e-105f-434b-a0a9-085eef609c9f","Type":"ContainerDied","Data":"50a81a27d2922082a70c8212fbf27d982f62edf3bc2d277cff5d780ee361354e"} Jan 28 15:15:57 crc kubenswrapper[4981]: I0128 15:15:57.303792 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhc2x" event={"ID":"4619cd2e-105f-434b-a0a9-085eef609c9f","Type":"ContainerStarted","Data":"7daaa66a5a1a9121e238d3dcd28c93ec834b73de2ba35897306061d29fd1c28d"} Jan 28 15:15:58 crc kubenswrapper[4981]: I0128 15:15:58.552633 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mhc2x" podStartSLOduration=3.844582995 podStartE2EDuration="7.552613858s" podCreationTimestamp="2026-01-28 15:15:51 +0000 UTC" firstStartedPulling="2026-01-28 15:15:53.248630614 +0000 UTC m=+764.700788895" lastFinishedPulling="2026-01-28 15:15:56.956661507 +0000 UTC m=+768.408819758" observedRunningTime="2026-01-28 15:15:57.327010151 +0000 UTC m=+768.779168432" watchObservedRunningTime="2026-01-28 15:15:58.552613858 +0000 UTC m=+770.004772099" Jan 28 15:15:58 crc kubenswrapper[4981]: I0128 15:15:58.556159 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-m7qf9"] Jan 28 15:15:58 crc kubenswrapper[4981]: E0128 15:15:58.556400 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35228b73-1ad1-4fa7-9470-ba0f42f71c3f" containerName="pull" Jan 28 15:15:58 crc kubenswrapper[4981]: I0128 15:15:58.556415 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="35228b73-1ad1-4fa7-9470-ba0f42f71c3f" containerName="pull" Jan 28 15:15:58 crc kubenswrapper[4981]: E0128 15:15:58.556434 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35228b73-1ad1-4fa7-9470-ba0f42f71c3f" containerName="extract" Jan 28 15:15:58 crc kubenswrapper[4981]: I0128 15:15:58.556441 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="35228b73-1ad1-4fa7-9470-ba0f42f71c3f" containerName="extract" Jan 28 15:15:58 crc kubenswrapper[4981]: E0128 15:15:58.556450 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35228b73-1ad1-4fa7-9470-ba0f42f71c3f" containerName="util" Jan 28 15:15:58 crc kubenswrapper[4981]: I0128 15:15:58.556456 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="35228b73-1ad1-4fa7-9470-ba0f42f71c3f" containerName="util" Jan 28 15:15:58 crc kubenswrapper[4981]: I0128 15:15:58.556540 4981 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="35228b73-1ad1-4fa7-9470-ba0f42f71c3f" containerName="extract" Jan 28 15:15:58 crc kubenswrapper[4981]: I0128 15:15:58.556886 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-m7qf9" Jan 28 15:15:58 crc kubenswrapper[4981]: I0128 15:15:58.558983 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 28 15:15:58 crc kubenswrapper[4981]: I0128 15:15:58.559323 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 28 15:15:58 crc kubenswrapper[4981]: I0128 15:15:58.560021 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-qmrb2" Jan 28 15:15:58 crc kubenswrapper[4981]: I0128 15:15:58.572816 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-m7qf9"] Jan 28 15:15:58 crc kubenswrapper[4981]: I0128 15:15:58.708902 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2ds5\" (UniqueName: \"kubernetes.io/projected/44612930-8e0e-4893-9f15-58b828449dbb-kube-api-access-b2ds5\") pod \"nmstate-operator-646758c888-m7qf9\" (UID: \"44612930-8e0e-4893-9f15-58b828449dbb\") " pod="openshift-nmstate/nmstate-operator-646758c888-m7qf9" Jan 28 15:15:58 crc kubenswrapper[4981]: I0128 15:15:58.809760 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2ds5\" (UniqueName: \"kubernetes.io/projected/44612930-8e0e-4893-9f15-58b828449dbb-kube-api-access-b2ds5\") pod \"nmstate-operator-646758c888-m7qf9\" (UID: \"44612930-8e0e-4893-9f15-58b828449dbb\") " pod="openshift-nmstate/nmstate-operator-646758c888-m7qf9" Jan 28 15:15:58 crc kubenswrapper[4981]: I0128 15:15:58.827631 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2ds5\" (UniqueName: \"kubernetes.io/projected/44612930-8e0e-4893-9f15-58b828449dbb-kube-api-access-b2ds5\") pod \"nmstate-operator-646758c888-m7qf9\" (UID: \"44612930-8e0e-4893-9f15-58b828449dbb\") " pod="openshift-nmstate/nmstate-operator-646758c888-m7qf9" Jan 28 15:15:58 crc kubenswrapper[4981]: I0128 15:15:58.873230 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-m7qf9" Jan 28 15:15:59 crc kubenswrapper[4981]: I0128 15:15:59.285302 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-m7qf9"] Jan 28 15:15:59 crc kubenswrapper[4981]: I0128 15:15:59.315082 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-m7qf9" event={"ID":"44612930-8e0e-4893-9f15-58b828449dbb","Type":"ContainerStarted","Data":"ecda252f1caf98a48fc3dba09aac306aedac326fec69fa9722d006125b52a7f1"} Jan 28 15:16:01 crc kubenswrapper[4981]: I0128 15:16:01.819831 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mhc2x" Jan 28 15:16:01 crc kubenswrapper[4981]: I0128 15:16:01.820380 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mhc2x" Jan 28 15:16:02 crc kubenswrapper[4981]: I0128 15:16:02.335537 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-m7qf9" event={"ID":"44612930-8e0e-4893-9f15-58b828449dbb","Type":"ContainerStarted","Data":"4404de45ec167d15f252a723e15b330643038faa0f0809106c1208987df861a4"} Jan 28 15:16:02 crc kubenswrapper[4981]: I0128 15:16:02.366094 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-m7qf9" podStartSLOduration=2.100339509 podStartE2EDuration="4.366072026s" podCreationTimestamp="2026-01-28 15:15:58 +0000 UTC" firstStartedPulling="2026-01-28 15:15:59.297343026 +0000 UTC m=+770.749501307" lastFinishedPulling="2026-01-28 15:16:01.563075583 +0000 UTC m=+773.015233824" observedRunningTime="2026-01-28 15:16:02.361314176 +0000 UTC m=+773.813472477" watchObservedRunningTime="2026-01-28 15:16:02.366072026 +0000 UTC m=+773.818230267" Jan 28 15:16:02 crc kubenswrapper[4981]: I0128 15:16:02.874793 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mhc2x" podUID="4619cd2e-105f-434b-a0a9-085eef609c9f" containerName="registry-server" probeResult="failure" output=< Jan 28 15:16:02 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s Jan 28 15:16:02 crc kubenswrapper[4981]: > Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.305380 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-lps8k"] Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.307514 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-lps8k" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.311302 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-t9mzr" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.312889 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-knc2t"] Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.313741 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-knc2t" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.318958 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.326385 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-lps8k"] Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.333287 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-kggn8"] Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.334085 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-kggn8" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.340973 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-knc2t"] Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.362016 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t54j\" (UniqueName: \"kubernetes.io/projected/ee7ee978-971b-4e70-ac41-8a6c8f10b226-kube-api-access-9t54j\") pod \"nmstate-handler-kggn8\" (UID: \"ee7ee978-971b-4e70-ac41-8a6c8f10b226\") " pod="openshift-nmstate/nmstate-handler-kggn8" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.362087 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg5sj\" (UniqueName: \"kubernetes.io/projected/efc29e7c-2d98-40bb-8335-5c763f217be4-kube-api-access-kg5sj\") pod \"nmstate-webhook-8474b5b9d8-knc2t\" (UID: \"efc29e7c-2d98-40bb-8335-5c763f217be4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-knc2t" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.362144 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/efc29e7c-2d98-40bb-8335-5c763f217be4-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-knc2t\" (UID: \"efc29e7c-2d98-40bb-8335-5c763f217be4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-knc2t" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.362233 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdwrb\" (UniqueName: \"kubernetes.io/projected/802eac95-d452-45f0-b0a2-765f410e4a6c-kube-api-access-qdwrb\") pod \"nmstate-metrics-54757c584b-lps8k\" (UID: \"802eac95-d452-45f0-b0a2-765f410e4a6c\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-lps8k" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.362271 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ee7ee978-971b-4e70-ac41-8a6c8f10b226-ovs-socket\") pod \"nmstate-handler-kggn8\" (UID: \"ee7ee978-971b-4e70-ac41-8a6c8f10b226\") " pod="openshift-nmstate/nmstate-handler-kggn8" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.362354 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ee7ee978-971b-4e70-ac41-8a6c8f10b226-nmstate-lock\") pod \"nmstate-handler-kggn8\" (UID: \"ee7ee978-971b-4e70-ac41-8a6c8f10b226\") " pod="openshift-nmstate/nmstate-handler-kggn8" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.362424 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ee7ee978-971b-4e70-ac41-8a6c8f10b226-dbus-socket\") pod \"nmstate-handler-kggn8\" (UID: \"ee7ee978-971b-4e70-ac41-8a6c8f10b226\") " pod="openshift-nmstate/nmstate-handler-kggn8" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.463539 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ee7ee978-971b-4e70-ac41-8a6c8f10b226-nmstate-lock\") pod \"nmstate-handler-kggn8\" (UID: \"ee7ee978-971b-4e70-ac41-8a6c8f10b226\") " pod="openshift-nmstate/nmstate-handler-kggn8" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.463590 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ee7ee978-971b-4e70-ac41-8a6c8f10b226-dbus-socket\") pod \"nmstate-handler-kggn8\" (UID: \"ee7ee978-971b-4e70-ac41-8a6c8f10b226\") " pod="openshift-nmstate/nmstate-handler-kggn8" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.463791 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/ee7ee978-971b-4e70-ac41-8a6c8f10b226-nmstate-lock\") pod \"nmstate-handler-kggn8\" (UID: \"ee7ee978-971b-4e70-ac41-8a6c8f10b226\") " pod="openshift-nmstate/nmstate-handler-kggn8" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.463890 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t54j\" (UniqueName: \"kubernetes.io/projected/ee7ee978-971b-4e70-ac41-8a6c8f10b226-kube-api-access-9t54j\") pod \"nmstate-handler-kggn8\" (UID: \"ee7ee978-971b-4e70-ac41-8a6c8f10b226\") " pod="openshift-nmstate/nmstate-handler-kggn8" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.463927 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kg5sj\" (UniqueName: \"kubernetes.io/projected/efc29e7c-2d98-40bb-8335-5c763f217be4-kube-api-access-kg5sj\") pod \"nmstate-webhook-8474b5b9d8-knc2t\" (UID: \"efc29e7c-2d98-40bb-8335-5c763f217be4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-knc2t" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.463959 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/efc29e7c-2d98-40bb-8335-5c763f217be4-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-knc2t\" (UID: \"efc29e7c-2d98-40bb-8335-5c763f217be4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-knc2t" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.463999 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdwrb\" (UniqueName: \"kubernetes.io/projected/802eac95-d452-45f0-b0a2-765f410e4a6c-kube-api-access-qdwrb\") pod \"nmstate-metrics-54757c584b-lps8k\" (UID: \"802eac95-d452-45f0-b0a2-765f410e4a6c\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-lps8k" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.464025 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ee7ee978-971b-4e70-ac41-8a6c8f10b226-ovs-socket\") pod \"nmstate-handler-kggn8\" (UID: \"ee7ee978-971b-4e70-ac41-8a6c8f10b226\") " pod="openshift-nmstate/nmstate-handler-kggn8" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.464090 4981 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/ee7ee978-971b-4e70-ac41-8a6c8f10b226-ovs-socket\") pod \"nmstate-handler-kggn8\" (UID: \"ee7ee978-971b-4e70-ac41-8a6c8f10b226\") " pod="openshift-nmstate/nmstate-handler-kggn8" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.464396 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/ee7ee978-971b-4e70-ac41-8a6c8f10b226-dbus-socket\") pod \"nmstate-handler-kggn8\" (UID: \"ee7ee978-971b-4e70-ac41-8a6c8f10b226\") " pod="openshift-nmstate/nmstate-handler-kggn8" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.472920 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mnsnb"] Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.473637 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mnsnb" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.476757 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.478585 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.478777 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-zdhxj" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.490088 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/efc29e7c-2d98-40bb-8335-5c763f217be4-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-knc2t\" (UID: \"efc29e7c-2d98-40bb-8335-5c763f217be4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-knc2t" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.493783 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mnsnb"] Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.494133 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdwrb\" (UniqueName: \"kubernetes.io/projected/802eac95-d452-45f0-b0a2-765f410e4a6c-kube-api-access-qdwrb\") pod \"nmstate-metrics-54757c584b-lps8k\" (UID: \"802eac95-d452-45f0-b0a2-765f410e4a6c\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-lps8k" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.501236 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg5sj\" (UniqueName: \"kubernetes.io/projected/efc29e7c-2d98-40bb-8335-5c763f217be4-kube-api-access-kg5sj\") pod \"nmstate-webhook-8474b5b9d8-knc2t\" (UID: \"efc29e7c-2d98-40bb-8335-5c763f217be4\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-knc2t" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.520476 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t54j\" (UniqueName: \"kubernetes.io/projected/ee7ee978-971b-4e70-ac41-8a6c8f10b226-kube-api-access-9t54j\") pod \"nmstate-handler-kggn8\" (UID: \"ee7ee978-971b-4e70-ac41-8a6c8f10b226\") " pod="openshift-nmstate/nmstate-handler-kggn8" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.565719 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e4078e50-5cc6-45b4-8a9a-3a37c51537fa-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mnsnb\" (UID: \"e4078e50-5cc6-45b4-8a9a-3a37c51537fa\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mnsnb" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.565776 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e4078e50-5cc6-45b4-8a9a-3a37c51537fa-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mnsnb\" (UID: \"e4078e50-5cc6-45b4-8a9a-3a37c51537fa\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mnsnb" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.565856 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhthz\" (UniqueName: \"kubernetes.io/projected/e4078e50-5cc6-45b4-8a9a-3a37c51537fa-kube-api-access-zhthz\") pod \"nmstate-console-plugin-7754f76f8b-mnsnb\" (UID: \"e4078e50-5cc6-45b4-8a9a-3a37c51537fa\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mnsnb" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.632422 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-lps8k" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.656257 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-knc2t" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.656290 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-84f68f79fb-wzhl4"] Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.656980 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.666820 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4078e50-5cc6-45b4-8a9a-3a37c51537fa-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mnsnb\" (UID: \"e4078e50-5cc6-45b4-8a9a-3a37c51537fa\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mnsnb" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.666862 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c378748e-abd2-467f-99d7-19107ea64705-console-oauth-config\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.666887 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e4078e50-5cc6-45b4-8a9a-3a37c51537fa-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mnsnb\" (UID: \"e4078e50-5cc6-45b4-8a9a-3a37c51537fa\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mnsnb" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.666915 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c378748e-abd2-467f-99d7-19107ea64705-console-config\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.666941 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c378748e-abd2-467f-99d7-19107ea64705-trusted-ca-bundle\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.666955 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c378748e-abd2-467f-99d7-19107ea64705-service-ca\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.666981 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c378748e-abd2-467f-99d7-19107ea64705-oauth-serving-cert\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.666997 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhthz\" (UniqueName: \"kubernetes.io/projected/e4078e50-5cc6-45b4-8a9a-3a37c51537fa-kube-api-access-zhthz\") pod \"nmstate-console-plugin-7754f76f8b-mnsnb\" (UID: \"e4078e50-5cc6-45b4-8a9a-3a37c51537fa\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mnsnb" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.667019 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c378748e-abd2-467f-99d7-19107ea64705-console-serving-cert\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.667038 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzkzr\" (UniqueName: \"kubernetes.io/projected/c378748e-abd2-467f-99d7-19107ea64705-kube-api-access-pzkzr\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.667792 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/e4078e50-5cc6-45b4-8a9a-3a37c51537fa-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-mnsnb\" (UID: \"e4078e50-5cc6-45b4-8a9a-3a37c51537fa\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mnsnb" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.670155 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-84f68f79fb-wzhl4"] Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.670702 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/e4078e50-5cc6-45b4-8a9a-3a37c51537fa-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-mnsnb\" (UID: \"e4078e50-5cc6-45b4-8a9a-3a37c51537fa\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mnsnb" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.675429 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-kggn8" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.691850 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhthz\" (UniqueName: \"kubernetes.io/projected/e4078e50-5cc6-45b4-8a9a-3a37c51537fa-kube-api-access-zhthz\") pod \"nmstate-console-plugin-7754f76f8b-mnsnb\" (UID: \"e4078e50-5cc6-45b4-8a9a-3a37c51537fa\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mnsnb" Jan 28 15:16:03 crc kubenswrapper[4981]: W0128 15:16:03.724129 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podee7ee978_971b_4e70_ac41_8a6c8f10b226.slice/crio-ed571edd6a248e17ebdc2783372048b13de8b7630444a190a424089b3b26ebf8 WatchSource:0}: Error finding container ed571edd6a248e17ebdc2783372048b13de8b7630444a190a424089b3b26ebf8: Status 404 returned error can't find the container with id ed571edd6a248e17ebdc2783372048b13de8b7630444a190a424089b3b26ebf8 Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.769865 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c378748e-abd2-467f-99d7-19107ea64705-console-oauth-config\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.770146 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c378748e-abd2-467f-99d7-19107ea64705-console-config\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.770204 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c378748e-abd2-467f-99d7-19107ea64705-trusted-ca-bundle\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.770227 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c378748e-abd2-467f-99d7-19107ea64705-service-ca\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.770275 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c378748e-abd2-467f-99d7-19107ea64705-oauth-serving-cert\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.770313 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c378748e-abd2-467f-99d7-19107ea64705-console-serving-cert\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.770342 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pzkzr\" (UniqueName: \"kubernetes.io/projected/c378748e-abd2-467f-99d7-19107ea64705-kube-api-access-pzkzr\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.772381 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c378748e-abd2-467f-99d7-19107ea64705-service-ca\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.772542 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c378748e-abd2-467f-99d7-19107ea64705-oauth-serving-cert\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.772912 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c378748e-abd2-467f-99d7-19107ea64705-console-config\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.773673 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c378748e-abd2-467f-99d7-19107ea64705-trusted-ca-bundle\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.782834 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c378748e-abd2-467f-99d7-19107ea64705-console-serving-cert\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.783373 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c378748e-abd2-467f-99d7-19107ea64705-console-oauth-config\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.792126 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzkzr\" (UniqueName: \"kubernetes.io/projected/c378748e-abd2-467f-99d7-19107ea64705-kube-api-access-pzkzr\") pod \"console-84f68f79fb-wzhl4\" (UID: \"c378748e-abd2-467f-99d7-19107ea64705\") " pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.854051 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mnsnb" Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.938667 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-lps8k"] Jan 28 15:16:03 crc kubenswrapper[4981]: I0128 15:16:03.980139 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:04 crc kubenswrapper[4981]: I0128 15:16:04.072829 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-mnsnb"] Jan 28 15:16:04 crc kubenswrapper[4981]: I0128 15:16:04.104276 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-knc2t"] Jan 28 15:16:04 crc kubenswrapper[4981]: W0128 15:16:04.107460 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefc29e7c_2d98_40bb_8335_5c763f217be4.slice/crio-5943689c5664104d0f1918d0c07ad5cfa13fe997fb81815cbb2155a8e84d99ad WatchSource:0}: Error finding container 5943689c5664104d0f1918d0c07ad5cfa13fe997fb81815cbb2155a8e84d99ad: Status 404 returned error can't find the container with id 5943689c5664104d0f1918d0c07ad5cfa13fe997fb81815cbb2155a8e84d99ad Jan 28 15:16:04 crc kubenswrapper[4981]: I0128 15:16:04.185602 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-84f68f79fb-wzhl4"] Jan 28 15:16:04 crc kubenswrapper[4981]: W0128 15:16:04.191935 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc378748e_abd2_467f_99d7_19107ea64705.slice/crio-d0a354010cd35ac56606ce858550a7608a4e27e23dfa1d2c557356b6a6ba8acb WatchSource:0}: Error finding container d0a354010cd35ac56606ce858550a7608a4e27e23dfa1d2c557356b6a6ba8acb: Status 404 returned error can't find the container with id d0a354010cd35ac56606ce858550a7608a4e27e23dfa1d2c557356b6a6ba8acb Jan 28 15:16:04 crc kubenswrapper[4981]: I0128 15:16:04.353072 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84f68f79fb-wzhl4" event={"ID":"c378748e-abd2-467f-99d7-19107ea64705","Type":"ContainerStarted","Data":"1e5ce69ac2e059e24db7393490b2c3fc7db0499dc00f6a61b42a39f94361a0f9"} Jan 28 15:16:04 crc kubenswrapper[4981]: I0128 15:16:04.353134 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-84f68f79fb-wzhl4" event={"ID":"c378748e-abd2-467f-99d7-19107ea64705","Type":"ContainerStarted","Data":"d0a354010cd35ac56606ce858550a7608a4e27e23dfa1d2c557356b6a6ba8acb"} Jan 28 15:16:04 crc kubenswrapper[4981]: I0128 15:16:04.355851 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mnsnb" event={"ID":"e4078e50-5cc6-45b4-8a9a-3a37c51537fa","Type":"ContainerStarted","Data":"38b754e0e80066ee99da74995a08ac25abaeb3d81a908d31abaf1072ab48b748"} Jan 28 15:16:04 crc kubenswrapper[4981]: I0128 15:16:04.357793 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-kggn8" event={"ID":"ee7ee978-971b-4e70-ac41-8a6c8f10b226","Type":"ContainerStarted","Data":"ed571edd6a248e17ebdc2783372048b13de8b7630444a190a424089b3b26ebf8"} Jan 28 15:16:04 crc kubenswrapper[4981]: I0128 15:16:04.358765 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-knc2t" event={"ID":"efc29e7c-2d98-40bb-8335-5c763f217be4","Type":"ContainerStarted","Data":"5943689c5664104d0f1918d0c07ad5cfa13fe997fb81815cbb2155a8e84d99ad"} Jan 28 15:16:04 crc kubenswrapper[4981]: I0128 15:16:04.359748 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-lps8k" 
event={"ID":"802eac95-d452-45f0-b0a2-765f410e4a6c","Type":"ContainerStarted","Data":"735643954c46d3b04cd34ddeb95a6a9a9668581932ac91d3739fc6bf0ea9b5ae"} Jan 28 15:16:04 crc kubenswrapper[4981]: I0128 15:16:04.383256 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-84f68f79fb-wzhl4" podStartSLOduration=1.383235061 podStartE2EDuration="1.383235061s" podCreationTimestamp="2026-01-28 15:16:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:16:04.377876665 +0000 UTC m=+775.830034956" watchObservedRunningTime="2026-01-28 15:16:04.383235061 +0000 UTC m=+775.835393322" Jan 28 15:16:06 crc kubenswrapper[4981]: I0128 15:16:06.375574 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-kggn8" event={"ID":"ee7ee978-971b-4e70-ac41-8a6c8f10b226","Type":"ContainerStarted","Data":"0bc7a48f3b709380c7312edd28ee90e6e76bd12fe34ddae89686aef9909d4d2f"} Jan 28 15:16:06 crc kubenswrapper[4981]: I0128 15:16:06.376403 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-kggn8" Jan 28 15:16:06 crc kubenswrapper[4981]: I0128 15:16:06.378466 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-knc2t" event={"ID":"efc29e7c-2d98-40bb-8335-5c763f217be4","Type":"ContainerStarted","Data":"6b2cf2504e5f8e34505f700e63e03c7fa9a9ecdb9ef5c79352416059f8e7f28f"} Jan 28 15:16:06 crc kubenswrapper[4981]: I0128 15:16:06.378650 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-knc2t" Jan 28 15:16:06 crc kubenswrapper[4981]: I0128 15:16:06.380326 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-lps8k" event={"ID":"802eac95-d452-45f0-b0a2-765f410e4a6c","Type":"ContainerStarted","Data":"1c2c6bb28860f81f710465efae930e5a951b13fc9e53e481a9dd8113ac1af437"} Jan 28 15:16:06 crc kubenswrapper[4981]: I0128 15:16:06.397061 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-kggn8" podStartSLOduration=1.298397397 podStartE2EDuration="3.396956381s" podCreationTimestamp="2026-01-28 15:16:03 +0000 UTC" firstStartedPulling="2026-01-28 15:16:03.725973137 +0000 UTC m=+775.178131378" lastFinishedPulling="2026-01-28 15:16:05.824532121 +0000 UTC m=+777.276690362" observedRunningTime="2026-01-28 15:16:06.395485902 +0000 UTC m=+777.847644163" watchObservedRunningTime="2026-01-28 15:16:06.396956381 +0000 UTC m=+777.849114642" Jan 28 15:16:06 crc kubenswrapper[4981]: I0128 15:16:06.418550 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-knc2t" podStartSLOduration=1.6946487540000001 podStartE2EDuration="3.418512595s" podCreationTimestamp="2026-01-28 15:16:03 +0000 UTC" firstStartedPulling="2026-01-28 15:16:04.110093767 +0000 UTC m=+775.562252008" lastFinishedPulling="2026-01-28 15:16:05.833957598 +0000 UTC m=+777.286115849" observedRunningTime="2026-01-28 15:16:06.408574454 +0000 UTC m=+777.860732695" watchObservedRunningTime="2026-01-28 15:16:06.418512595 +0000 UTC m=+777.870670836" Jan 28 15:16:07 crc kubenswrapper[4981]: I0128 15:16:07.008880 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ln2cr"] Jan 28 15:16:07 crc kubenswrapper[4981]: I0128 
15:16:07.010550 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ln2cr" Jan 28 15:16:07 crc kubenswrapper[4981]: I0128 15:16:07.018134 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8kjw\" (UniqueName: \"kubernetes.io/projected/b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e-kube-api-access-s8kjw\") pod \"community-operators-ln2cr\" (UID: \"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e\") " pod="openshift-marketplace/community-operators-ln2cr" Jan 28 15:16:07 crc kubenswrapper[4981]: I0128 15:16:07.018261 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e-catalog-content\") pod \"community-operators-ln2cr\" (UID: \"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e\") " pod="openshift-marketplace/community-operators-ln2cr" Jan 28 15:16:07 crc kubenswrapper[4981]: I0128 15:16:07.018397 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e-utilities\") pod \"community-operators-ln2cr\" (UID: \"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e\") " pod="openshift-marketplace/community-operators-ln2cr" Jan 28 15:16:07 crc kubenswrapper[4981]: I0128 15:16:07.019643 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ln2cr"] Jan 28 15:16:07 crc kubenswrapper[4981]: I0128 15:16:07.119574 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e-utilities\") pod \"community-operators-ln2cr\" (UID: \"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e\") " pod="openshift-marketplace/community-operators-ln2cr" Jan 28 15:16:07 crc kubenswrapper[4981]: I0128 15:16:07.119657 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8kjw\" (UniqueName: \"kubernetes.io/projected/b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e-kube-api-access-s8kjw\") pod \"community-operators-ln2cr\" (UID: \"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e\") " pod="openshift-marketplace/community-operators-ln2cr" Jan 28 15:16:07 crc kubenswrapper[4981]: I0128 15:16:07.119687 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e-catalog-content\") pod \"community-operators-ln2cr\" (UID: \"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e\") " pod="openshift-marketplace/community-operators-ln2cr" Jan 28 15:16:07 crc kubenswrapper[4981]: I0128 15:16:07.120382 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e-catalog-content\") pod \"community-operators-ln2cr\" (UID: \"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e\") " pod="openshift-marketplace/community-operators-ln2cr" Jan 28 15:16:07 crc kubenswrapper[4981]: I0128 15:16:07.120539 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e-utilities\") pod \"community-operators-ln2cr\" (UID: \"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e\") " pod="openshift-marketplace/community-operators-ln2cr" Jan 28 15:16:07 crc 
kubenswrapper[4981]: I0128 15:16:07.139288 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8kjw\" (UniqueName: \"kubernetes.io/projected/b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e-kube-api-access-s8kjw\") pod \"community-operators-ln2cr\" (UID: \"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e\") " pod="openshift-marketplace/community-operators-ln2cr" Jan 28 15:16:07 crc kubenswrapper[4981]: I0128 15:16:07.326878 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ln2cr" Jan 28 15:16:07 crc kubenswrapper[4981]: I0128 15:16:07.408684 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mnsnb" event={"ID":"e4078e50-5cc6-45b4-8a9a-3a37c51537fa","Type":"ContainerStarted","Data":"ec9794332295163db58deded30f6d2fc962b9b55389d323b0fa305fccea2e107"} Jan 28 15:16:07 crc kubenswrapper[4981]: I0128 15:16:07.669809 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-mnsnb" podStartSLOduration=1.7130572179999999 podStartE2EDuration="4.669788125s" podCreationTimestamp="2026-01-28 15:16:03 +0000 UTC" firstStartedPulling="2026-01-28 15:16:04.083934832 +0000 UTC m=+775.536093073" lastFinishedPulling="2026-01-28 15:16:07.040665739 +0000 UTC m=+778.492823980" observedRunningTime="2026-01-28 15:16:07.433602519 +0000 UTC m=+778.885760760" watchObservedRunningTime="2026-01-28 15:16:07.669788125 +0000 UTC m=+779.121946366" Jan 28 15:16:07 crc kubenswrapper[4981]: I0128 15:16:07.671893 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ln2cr"] Jan 28 15:16:07 crc kubenswrapper[4981]: W0128 15:16:07.695862 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6015b1a_c1d7_4c57_88b1_4c5e4cb0ea1e.slice/crio-a3cfcf8fc2db6f49882a9d931ef6523ec8067bb237a3f0d997d8a4faf636d87a WatchSource:0}: Error finding container a3cfcf8fc2db6f49882a9d931ef6523ec8067bb237a3f0d997d8a4faf636d87a: Status 404 returned error can't find the container with id a3cfcf8fc2db6f49882a9d931ef6523ec8067bb237a3f0d997d8a4faf636d87a Jan 28 15:16:08 crc kubenswrapper[4981]: I0128 15:16:08.415171 4981 generic.go:334] "Generic (PLEG): container finished" podID="b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e" containerID="a488bce5979c1a1adc02302bde9bd315a31a82da7c8956639315362801f66229" exitCode=0 Jan 28 15:16:08 crc kubenswrapper[4981]: I0128 15:16:08.415297 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2cr" event={"ID":"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e","Type":"ContainerDied","Data":"a488bce5979c1a1adc02302bde9bd315a31a82da7c8956639315362801f66229"} Jan 28 15:16:08 crc kubenswrapper[4981]: I0128 15:16:08.415604 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2cr" event={"ID":"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e","Type":"ContainerStarted","Data":"a3cfcf8fc2db6f49882a9d931ef6523ec8067bb237a3f0d997d8a4faf636d87a"} Jan 28 15:16:10 crc kubenswrapper[4981]: I0128 15:16:10.431258 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-lps8k" event={"ID":"802eac95-d452-45f0-b0a2-765f410e4a6c","Type":"ContainerStarted","Data":"d28a9249eb617268e1380a138fc83bed74da6d9fad4825ed65900cd291a1d719"} Jan 28 15:16:10 crc kubenswrapper[4981]: I0128 
15:16:10.437170 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2cr" event={"ID":"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e","Type":"ContainerStarted","Data":"2b755ac142cb45c389b072c27e26efdd0d8d593748905569d16c05bea92f88ae"} Jan 28 15:16:10 crc kubenswrapper[4981]: I0128 15:16:10.453418 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-lps8k" podStartSLOduration=1.404385962 podStartE2EDuration="7.453396494s" podCreationTimestamp="2026-01-28 15:16:03 +0000 UTC" firstStartedPulling="2026-01-28 15:16:03.946236045 +0000 UTC m=+775.398394286" lastFinishedPulling="2026-01-28 15:16:09.995246557 +0000 UTC m=+781.447404818" observedRunningTime="2026-01-28 15:16:10.45305093 +0000 UTC m=+781.905209191" watchObservedRunningTime="2026-01-28 15:16:10.453396494 +0000 UTC m=+781.905554755" Jan 28 15:16:11 crc kubenswrapper[4981]: I0128 15:16:11.445273 4981 generic.go:334] "Generic (PLEG): container finished" podID="b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e" containerID="2b755ac142cb45c389b072c27e26efdd0d8d593748905569d16c05bea92f88ae" exitCode=0 Jan 28 15:16:11 crc kubenswrapper[4981]: I0128 15:16:11.445376 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2cr" event={"ID":"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e","Type":"ContainerDied","Data":"2b755ac142cb45c389b072c27e26efdd0d8d593748905569d16c05bea92f88ae"} Jan 28 15:16:11 crc kubenswrapper[4981]: I0128 15:16:11.871300 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mhc2x" Jan 28 15:16:11 crc kubenswrapper[4981]: I0128 15:16:11.950100 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mhc2x" Jan 28 15:16:12 crc kubenswrapper[4981]: I0128 15:16:12.456701 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2cr" event={"ID":"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e","Type":"ContainerStarted","Data":"8aa588164bd89a4f925a17a9561d809db7c34ae8d8ed5a1991d0247004ecec37"} Jan 28 15:16:12 crc kubenswrapper[4981]: I0128 15:16:12.483009 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ln2cr" podStartSLOduration=3.163784198 podStartE2EDuration="6.482980063s" podCreationTimestamp="2026-01-28 15:16:06 +0000 UTC" firstStartedPulling="2026-01-28 15:16:08.513501874 +0000 UTC m=+779.965660145" lastFinishedPulling="2026-01-28 15:16:11.832697759 +0000 UTC m=+783.284856010" observedRunningTime="2026-01-28 15:16:12.47813808 +0000 UTC m=+783.930296371" watchObservedRunningTime="2026-01-28 15:16:12.482980063 +0000 UTC m=+783.935138344" Jan 28 15:16:13 crc kubenswrapper[4981]: I0128 15:16:13.707560 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-kggn8" Jan 28 15:16:13 crc kubenswrapper[4981]: I0128 15:16:13.981683 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:13 crc kubenswrapper[4981]: I0128 15:16:13.982000 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:13 crc kubenswrapper[4981]: I0128 15:16:13.991490 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-84f68f79fb-wzhl4" 
Jan 28 15:16:14 crc kubenswrapper[4981]: I0128 15:16:14.180021 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mhc2x"] Jan 28 15:16:14 crc kubenswrapper[4981]: I0128 15:16:14.180342 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mhc2x" podUID="4619cd2e-105f-434b-a0a9-085eef609c9f" containerName="registry-server" containerID="cri-o://7daaa66a5a1a9121e238d3dcd28c93ec834b73de2ba35897306061d29fd1c28d" gracePeriod=2 Jan 28 15:16:14 crc kubenswrapper[4981]: I0128 15:16:14.471990 4981 generic.go:334] "Generic (PLEG): container finished" podID="4619cd2e-105f-434b-a0a9-085eef609c9f" containerID="7daaa66a5a1a9121e238d3dcd28c93ec834b73de2ba35897306061d29fd1c28d" exitCode=0 Jan 28 15:16:14 crc kubenswrapper[4981]: I0128 15:16:14.472179 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhc2x" event={"ID":"4619cd2e-105f-434b-a0a9-085eef609c9f","Type":"ContainerDied","Data":"7daaa66a5a1a9121e238d3dcd28c93ec834b73de2ba35897306061d29fd1c28d"} Jan 28 15:16:14 crc kubenswrapper[4981]: I0128 15:16:14.475877 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-84f68f79fb-wzhl4" Jan 28 15:16:14 crc kubenswrapper[4981]: I0128 15:16:14.520274 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vc85q"] Jan 28 15:16:14 crc kubenswrapper[4981]: I0128 15:16:14.599354 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mhc2x" Jan 28 15:16:14 crc kubenswrapper[4981]: I0128 15:16:14.735029 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4619cd2e-105f-434b-a0a9-085eef609c9f-catalog-content\") pod \"4619cd2e-105f-434b-a0a9-085eef609c9f\" (UID: \"4619cd2e-105f-434b-a0a9-085eef609c9f\") " Jan 28 15:16:14 crc kubenswrapper[4981]: I0128 15:16:14.735131 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4619cd2e-105f-434b-a0a9-085eef609c9f-utilities\") pod \"4619cd2e-105f-434b-a0a9-085eef609c9f\" (UID: \"4619cd2e-105f-434b-a0a9-085eef609c9f\") " Jan 28 15:16:14 crc kubenswrapper[4981]: I0128 15:16:14.735156 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbtgv\" (UniqueName: \"kubernetes.io/projected/4619cd2e-105f-434b-a0a9-085eef609c9f-kube-api-access-dbtgv\") pod \"4619cd2e-105f-434b-a0a9-085eef609c9f\" (UID: \"4619cd2e-105f-434b-a0a9-085eef609c9f\") " Jan 28 15:16:14 crc kubenswrapper[4981]: I0128 15:16:14.736560 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4619cd2e-105f-434b-a0a9-085eef609c9f-utilities" (OuterVolumeSpecName: "utilities") pod "4619cd2e-105f-434b-a0a9-085eef609c9f" (UID: "4619cd2e-105f-434b-a0a9-085eef609c9f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:16:14 crc kubenswrapper[4981]: I0128 15:16:14.745397 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4619cd2e-105f-434b-a0a9-085eef609c9f-kube-api-access-dbtgv" (OuterVolumeSpecName: "kube-api-access-dbtgv") pod "4619cd2e-105f-434b-a0a9-085eef609c9f" (UID: "4619cd2e-105f-434b-a0a9-085eef609c9f"). 
InnerVolumeSpecName "kube-api-access-dbtgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:16:14 crc kubenswrapper[4981]: I0128 15:16:14.836626 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4619cd2e-105f-434b-a0a9-085eef609c9f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:14 crc kubenswrapper[4981]: I0128 15:16:14.836666 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbtgv\" (UniqueName: \"kubernetes.io/projected/4619cd2e-105f-434b-a0a9-085eef609c9f-kube-api-access-dbtgv\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:14 crc kubenswrapper[4981]: I0128 15:16:14.837845 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4619cd2e-105f-434b-a0a9-085eef609c9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4619cd2e-105f-434b-a0a9-085eef609c9f" (UID: "4619cd2e-105f-434b-a0a9-085eef609c9f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:16:14 crc kubenswrapper[4981]: I0128 15:16:14.937961 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4619cd2e-105f-434b-a0a9-085eef609c9f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:15 crc kubenswrapper[4981]: I0128 15:16:15.485788 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mhc2x" event={"ID":"4619cd2e-105f-434b-a0a9-085eef609c9f","Type":"ContainerDied","Data":"01e54184a3505415085be78d793d673a504f14fe393f12f50306a8ce241d331f"} Jan 28 15:16:15 crc kubenswrapper[4981]: I0128 15:16:15.485836 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mhc2x" Jan 28 15:16:15 crc kubenswrapper[4981]: I0128 15:16:15.485899 4981 scope.go:117] "RemoveContainer" containerID="7daaa66a5a1a9121e238d3dcd28c93ec834b73de2ba35897306061d29fd1c28d" Jan 28 15:16:15 crc kubenswrapper[4981]: I0128 15:16:15.516947 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mhc2x"] Jan 28 15:16:15 crc kubenswrapper[4981]: I0128 15:16:15.517721 4981 scope.go:117] "RemoveContainer" containerID="50a81a27d2922082a70c8212fbf27d982f62edf3bc2d277cff5d780ee361354e" Jan 28 15:16:15 crc kubenswrapper[4981]: I0128 15:16:15.522450 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mhc2x"] Jan 28 15:16:15 crc kubenswrapper[4981]: I0128 15:16:15.539766 4981 scope.go:117] "RemoveContainer" containerID="f1454339e50b20d41a5724f409ff2c0ffacb3d8c22765c9f146d6f5fd189e9da" Jan 28 15:16:17 crc kubenswrapper[4981]: I0128 15:16:17.330672 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4619cd2e-105f-434b-a0a9-085eef609c9f" path="/var/lib/kubelet/pods/4619cd2e-105f-434b-a0a9-085eef609c9f/volumes" Jan 28 15:16:17 crc kubenswrapper[4981]: I0128 15:16:17.332381 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ln2cr" Jan 28 15:16:17 crc kubenswrapper[4981]: I0128 15:16:17.332450 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ln2cr" Jan 28 15:16:17 crc kubenswrapper[4981]: I0128 15:16:17.396958 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ln2cr" Jan 28 15:16:17 crc kubenswrapper[4981]: I0128 15:16:17.564049 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ln2cr" Jan 28 15:16:18 crc kubenswrapper[4981]: I0128 15:16:18.582552 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ln2cr"] Jan 28 15:16:19 crc kubenswrapper[4981]: I0128 15:16:19.521373 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ln2cr" podUID="b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e" containerName="registry-server" containerID="cri-o://8aa588164bd89a4f925a17a9561d809db7c34ae8d8ed5a1991d0247004ecec37" gracePeriod=2 Jan 28 15:16:19 crc kubenswrapper[4981]: I0128 15:16:19.897776 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:16:19 crc kubenswrapper[4981]: I0128 15:16:19.898351 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:16:19 crc kubenswrapper[4981]: I0128 15:16:19.939261 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ln2cr" Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.021374 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e-utilities\") pod \"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e\" (UID: \"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e\") " Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.021436 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e-catalog-content\") pod \"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e\" (UID: \"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e\") " Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.021597 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8kjw\" (UniqueName: \"kubernetes.io/projected/b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e-kube-api-access-s8kjw\") pod \"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e\" (UID: \"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e\") " Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.022438 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e-utilities" (OuterVolumeSpecName: "utilities") pod "b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e" (UID: "b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.034860 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e-kube-api-access-s8kjw" (OuterVolumeSpecName: "kube-api-access-s8kjw") pod "b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e" (UID: "b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e"). InnerVolumeSpecName "kube-api-access-s8kjw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.077085 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e" (UID: "b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.122452 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8kjw\" (UniqueName: \"kubernetes.io/projected/b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e-kube-api-access-s8kjw\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.122510 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.122527 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.530691 4981 generic.go:334] "Generic (PLEG): container finished" podID="b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e" containerID="8aa588164bd89a4f925a17a9561d809db7c34ae8d8ed5a1991d0247004ecec37" exitCode=0 Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.530751 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2cr" event={"ID":"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e","Type":"ContainerDied","Data":"8aa588164bd89a4f925a17a9561d809db7c34ae8d8ed5a1991d0247004ecec37"} Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.530769 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ln2cr" Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.530804 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2cr" event={"ID":"b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e","Type":"ContainerDied","Data":"a3cfcf8fc2db6f49882a9d931ef6523ec8067bb237a3f0d997d8a4faf636d87a"} Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.530828 4981 scope.go:117] "RemoveContainer" containerID="8aa588164bd89a4f925a17a9561d809db7c34ae8d8ed5a1991d0247004ecec37" Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.562121 4981 scope.go:117] "RemoveContainer" containerID="2b755ac142cb45c389b072c27e26efdd0d8d593748905569d16c05bea92f88ae" Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.574758 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ln2cr"] Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.578682 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ln2cr"] Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.605722 4981 scope.go:117] "RemoveContainer" containerID="a488bce5979c1a1adc02302bde9bd315a31a82da7c8956639315362801f66229" Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.625759 4981 scope.go:117] "RemoveContainer" containerID="8aa588164bd89a4f925a17a9561d809db7c34ae8d8ed5a1991d0247004ecec37" Jan 28 15:16:20 crc kubenswrapper[4981]: E0128 15:16:20.626384 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aa588164bd89a4f925a17a9561d809db7c34ae8d8ed5a1991d0247004ecec37\": container with ID starting with 8aa588164bd89a4f925a17a9561d809db7c34ae8d8ed5a1991d0247004ecec37 not found: ID does not exist" containerID="8aa588164bd89a4f925a17a9561d809db7c34ae8d8ed5a1991d0247004ecec37" Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.626484 
4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aa588164bd89a4f925a17a9561d809db7c34ae8d8ed5a1991d0247004ecec37"} err="failed to get container status \"8aa588164bd89a4f925a17a9561d809db7c34ae8d8ed5a1991d0247004ecec37\": rpc error: code = NotFound desc = could not find container \"8aa588164bd89a4f925a17a9561d809db7c34ae8d8ed5a1991d0247004ecec37\": container with ID starting with 8aa588164bd89a4f925a17a9561d809db7c34ae8d8ed5a1991d0247004ecec37 not found: ID does not exist" Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.626579 4981 scope.go:117] "RemoveContainer" containerID="2b755ac142cb45c389b072c27e26efdd0d8d593748905569d16c05bea92f88ae" Jan 28 15:16:20 crc kubenswrapper[4981]: E0128 15:16:20.627080 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b755ac142cb45c389b072c27e26efdd0d8d593748905569d16c05bea92f88ae\": container with ID starting with 2b755ac142cb45c389b072c27e26efdd0d8d593748905569d16c05bea92f88ae not found: ID does not exist" containerID="2b755ac142cb45c389b072c27e26efdd0d8d593748905569d16c05bea92f88ae" Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.627211 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b755ac142cb45c389b072c27e26efdd0d8d593748905569d16c05bea92f88ae"} err="failed to get container status \"2b755ac142cb45c389b072c27e26efdd0d8d593748905569d16c05bea92f88ae\": rpc error: code = NotFound desc = could not find container \"2b755ac142cb45c389b072c27e26efdd0d8d593748905569d16c05bea92f88ae\": container with ID starting with 2b755ac142cb45c389b072c27e26efdd0d8d593748905569d16c05bea92f88ae not found: ID does not exist" Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.627283 4981 scope.go:117] "RemoveContainer" containerID="a488bce5979c1a1adc02302bde9bd315a31a82da7c8956639315362801f66229" Jan 28 15:16:20 crc kubenswrapper[4981]: E0128 15:16:20.627740 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a488bce5979c1a1adc02302bde9bd315a31a82da7c8956639315362801f66229\": container with ID starting with a488bce5979c1a1adc02302bde9bd315a31a82da7c8956639315362801f66229 not found: ID does not exist" containerID="a488bce5979c1a1adc02302bde9bd315a31a82da7c8956639315362801f66229" Jan 28 15:16:20 crc kubenswrapper[4981]: I0128 15:16:20.627833 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a488bce5979c1a1adc02302bde9bd315a31a82da7c8956639315362801f66229"} err="failed to get container status \"a488bce5979c1a1adc02302bde9bd315a31a82da7c8956639315362801f66229\": rpc error: code = NotFound desc = could not find container \"a488bce5979c1a1adc02302bde9bd315a31a82da7c8956639315362801f66229\": container with ID starting with a488bce5979c1a1adc02302bde9bd315a31a82da7c8956639315362801f66229 not found: ID does not exist" Jan 28 15:16:21 crc kubenswrapper[4981]: I0128 15:16:21.337073 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e" path="/var/lib/kubelet/pods/b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e/volumes" Jan 28 15:16:23 crc kubenswrapper[4981]: I0128 15:16:23.667557 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-knc2t" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.130412 4981 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg"] Jan 28 15:16:37 crc kubenswrapper[4981]: E0128 15:16:37.131358 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4619cd2e-105f-434b-a0a9-085eef609c9f" containerName="extract-utilities" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.131389 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4619cd2e-105f-434b-a0a9-085eef609c9f" containerName="extract-utilities" Jan 28 15:16:37 crc kubenswrapper[4981]: E0128 15:16:37.131407 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4619cd2e-105f-434b-a0a9-085eef609c9f" containerName="registry-server" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.131415 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4619cd2e-105f-434b-a0a9-085eef609c9f" containerName="registry-server" Jan 28 15:16:37 crc kubenswrapper[4981]: E0128 15:16:37.131425 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e" containerName="extract-content" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.131433 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e" containerName="extract-content" Jan 28 15:16:37 crc kubenswrapper[4981]: E0128 15:16:37.131450 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4619cd2e-105f-434b-a0a9-085eef609c9f" containerName="extract-content" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.131459 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4619cd2e-105f-434b-a0a9-085eef609c9f" containerName="extract-content" Jan 28 15:16:37 crc kubenswrapper[4981]: E0128 15:16:37.131472 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e" containerName="registry-server" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.131480 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e" containerName="registry-server" Jan 28 15:16:37 crc kubenswrapper[4981]: E0128 15:16:37.131491 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e" containerName="extract-utilities" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.131498 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e" containerName="extract-utilities" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.131627 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="4619cd2e-105f-434b-a0a9-085eef609c9f" containerName="registry-server" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.131649 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6015b1a-c1d7-4c57-88b1-4c5e4cb0ea1e" containerName="registry-server" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.132586 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.140782 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.145138 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg"] Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.195035 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmqq5\" (UniqueName: \"kubernetes.io/projected/34033ece-4d02-4648-9025-0642096f42d3-kube-api-access-gmqq5\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg\" (UID: \"34033ece-4d02-4648-9025-0642096f42d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.195100 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34033ece-4d02-4648-9025-0642096f42d3-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg\" (UID: \"34033ece-4d02-4648-9025-0642096f42d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.195170 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34033ece-4d02-4648-9025-0642096f42d3-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg\" (UID: \"34033ece-4d02-4648-9025-0642096f42d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.296625 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmqq5\" (UniqueName: \"kubernetes.io/projected/34033ece-4d02-4648-9025-0642096f42d3-kube-api-access-gmqq5\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg\" (UID: \"34033ece-4d02-4648-9025-0642096f42d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.296718 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34033ece-4d02-4648-9025-0642096f42d3-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg\" (UID: \"34033ece-4d02-4648-9025-0642096f42d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.296780 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34033ece-4d02-4648-9025-0642096f42d3-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg\" (UID: \"34033ece-4d02-4648-9025-0642096f42d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.297337 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/34033ece-4d02-4648-9025-0642096f42d3-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg\" (UID: \"34033ece-4d02-4648-9025-0642096f42d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.297499 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34033ece-4d02-4648-9025-0642096f42d3-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg\" (UID: \"34033ece-4d02-4648-9025-0642096f42d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.319721 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmqq5\" (UniqueName: \"kubernetes.io/projected/34033ece-4d02-4648-9025-0642096f42d3-kube-api-access-gmqq5\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg\" (UID: \"34033ece-4d02-4648-9025-0642096f42d3\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.451456 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg" Jan 28 15:16:37 crc kubenswrapper[4981]: I0128 15:16:37.903611 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg"] Jan 28 15:16:38 crc kubenswrapper[4981]: I0128 15:16:38.653696 4981 generic.go:334] "Generic (PLEG): container finished" podID="34033ece-4d02-4648-9025-0642096f42d3" containerID="90d57283cacfaf859f2eef34ae01f83a65f11b948644543b3e06b7485c16b681" exitCode=0 Jan 28 15:16:38 crc kubenswrapper[4981]: I0128 15:16:38.653740 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg" event={"ID":"34033ece-4d02-4648-9025-0642096f42d3","Type":"ContainerDied","Data":"90d57283cacfaf859f2eef34ae01f83a65f11b948644543b3e06b7485c16b681"} Jan 28 15:16:38 crc kubenswrapper[4981]: I0128 15:16:38.653767 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg" event={"ID":"34033ece-4d02-4648-9025-0642096f42d3","Type":"ContainerStarted","Data":"ac95c19ad969f394022387c95f11715fac2177dd308f0f98ab39a5356fee62d7"} Jan 28 15:16:39 crc kubenswrapper[4981]: I0128 15:16:39.570556 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-vc85q" podUID="5c29d863-f1a8-42dc-8916-988d6d45f3d9" containerName="console" containerID="cri-o://4ce35066677641ba023298363c67a98b43fe9954b1d6c00f6a1a900ab44d81ed" gracePeriod=15 Jan 28 15:16:39 crc kubenswrapper[4981]: I0128 15:16:39.800845 4981 patch_prober.go:28] interesting pod/console-f9d7485db-vc85q container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 28 15:16:39 crc kubenswrapper[4981]: I0128 15:16:39.800922 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-vc85q" podUID="5c29d863-f1a8-42dc-8916-988d6d45f3d9" containerName="console" 
probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 28 15:16:39 crc kubenswrapper[4981]: I0128 15:16:39.993703 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vc85q_5c29d863-f1a8-42dc-8916-988d6d45f3d9/console/0.log" Jan 28 15:16:39 crc kubenswrapper[4981]: I0128 15:16:39.993762 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.036706 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c29d863-f1a8-42dc-8916-988d6d45f3d9-console-serving-cert\") pod \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.036782 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c29d863-f1a8-42dc-8916-988d6d45f3d9-console-oauth-config\") pod \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.036816 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-oauth-serving-cert\") pod \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.036858 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtt5r\" (UniqueName: \"kubernetes.io/projected/5c29d863-f1a8-42dc-8916-988d6d45f3d9-kube-api-access-qtt5r\") pod \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.036890 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-console-config\") pod \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.036924 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-trusted-ca-bundle\") pod \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.036963 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-service-ca\") pod \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\" (UID: \"5c29d863-f1a8-42dc-8916-988d6d45f3d9\") " Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.038418 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-console-config" (OuterVolumeSpecName: "console-config") pod "5c29d863-f1a8-42dc-8916-988d6d45f3d9" (UID: "5c29d863-f1a8-42dc-8916-988d6d45f3d9"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.038550 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-service-ca" (OuterVolumeSpecName: "service-ca") pod "5c29d863-f1a8-42dc-8916-988d6d45f3d9" (UID: "5c29d863-f1a8-42dc-8916-988d6d45f3d9"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.038613 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5c29d863-f1a8-42dc-8916-988d6d45f3d9" (UID: "5c29d863-f1a8-42dc-8916-988d6d45f3d9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.038697 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5c29d863-f1a8-42dc-8916-988d6d45f3d9" (UID: "5c29d863-f1a8-42dc-8916-988d6d45f3d9"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.042729 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c29d863-f1a8-42dc-8916-988d6d45f3d9-kube-api-access-qtt5r" (OuterVolumeSpecName: "kube-api-access-qtt5r") pod "5c29d863-f1a8-42dc-8916-988d6d45f3d9" (UID: "5c29d863-f1a8-42dc-8916-988d6d45f3d9"). InnerVolumeSpecName "kube-api-access-qtt5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.043088 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c29d863-f1a8-42dc-8916-988d6d45f3d9-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5c29d863-f1a8-42dc-8916-988d6d45f3d9" (UID: "5c29d863-f1a8-42dc-8916-988d6d45f3d9"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.043417 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c29d863-f1a8-42dc-8916-988d6d45f3d9-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5c29d863-f1a8-42dc-8916-988d6d45f3d9" (UID: "5c29d863-f1a8-42dc-8916-988d6d45f3d9"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.138701 4981 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5c29d863-f1a8-42dc-8916-988d6d45f3d9-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.139306 4981 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5c29d863-f1a8-42dc-8916-988d6d45f3d9-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.139355 4981 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.139376 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtt5r\" (UniqueName: \"kubernetes.io/projected/5c29d863-f1a8-42dc-8916-988d6d45f3d9-kube-api-access-qtt5r\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.139395 4981 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.139412 4981 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.139429 4981 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5c29d863-f1a8-42dc-8916-988d6d45f3d9-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.678756 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-vc85q_5c29d863-f1a8-42dc-8916-988d6d45f3d9/console/0.log" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.678820 4981 generic.go:334] "Generic (PLEG): container finished" podID="5c29d863-f1a8-42dc-8916-988d6d45f3d9" containerID="4ce35066677641ba023298363c67a98b43fe9954b1d6c00f6a1a900ab44d81ed" exitCode=2 Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.678858 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vc85q" event={"ID":"5c29d863-f1a8-42dc-8916-988d6d45f3d9","Type":"ContainerDied","Data":"4ce35066677641ba023298363c67a98b43fe9954b1d6c00f6a1a900ab44d81ed"} Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.678899 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-vc85q" event={"ID":"5c29d863-f1a8-42dc-8916-988d6d45f3d9","Type":"ContainerDied","Data":"c9e603e551d2ec7f53014326d6f18c85b0c3e79a2bd4e8a712d3543ee3bda684"} Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.678920 4981 scope.go:117] "RemoveContainer" containerID="4ce35066677641ba023298363c67a98b43fe9954b1d6c00f6a1a900ab44d81ed" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.678977 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-vc85q" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.706950 4981 scope.go:117] "RemoveContainer" containerID="4ce35066677641ba023298363c67a98b43fe9954b1d6c00f6a1a900ab44d81ed" Jan 28 15:16:40 crc kubenswrapper[4981]: E0128 15:16:40.707627 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ce35066677641ba023298363c67a98b43fe9954b1d6c00f6a1a900ab44d81ed\": container with ID starting with 4ce35066677641ba023298363c67a98b43fe9954b1d6c00f6a1a900ab44d81ed not found: ID does not exist" containerID="4ce35066677641ba023298363c67a98b43fe9954b1d6c00f6a1a900ab44d81ed" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.707685 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ce35066677641ba023298363c67a98b43fe9954b1d6c00f6a1a900ab44d81ed"} err="failed to get container status \"4ce35066677641ba023298363c67a98b43fe9954b1d6c00f6a1a900ab44d81ed\": rpc error: code = NotFound desc = could not find container \"4ce35066677641ba023298363c67a98b43fe9954b1d6c00f6a1a900ab44d81ed\": container with ID starting with 4ce35066677641ba023298363c67a98b43fe9954b1d6c00f6a1a900ab44d81ed not found: ID does not exist" Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.720371 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-vc85q"] Jan 28 15:16:40 crc kubenswrapper[4981]: I0128 15:16:40.725017 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-vc85q"] Jan 28 15:16:41 crc kubenswrapper[4981]: I0128 15:16:41.327853 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c29d863-f1a8-42dc-8916-988d6d45f3d9" path="/var/lib/kubelet/pods/5c29d863-f1a8-42dc-8916-988d6d45f3d9/volumes" Jan 28 15:16:48 crc kubenswrapper[4981]: I0128 15:16:48.821821 4981 generic.go:334] "Generic (PLEG): container finished" podID="34033ece-4d02-4648-9025-0642096f42d3" containerID="af20aaf3f38f3cddeebf01078abac61d6bffc70be8482806e66025f3c9fa8b99" exitCode=0 Jan 28 15:16:48 crc kubenswrapper[4981]: I0128 15:16:48.821943 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg" event={"ID":"34033ece-4d02-4648-9025-0642096f42d3","Type":"ContainerDied","Data":"af20aaf3f38f3cddeebf01078abac61d6bffc70be8482806e66025f3c9fa8b99"} Jan 28 15:16:49 crc kubenswrapper[4981]: I0128 15:16:49.835397 4981 generic.go:334] "Generic (PLEG): container finished" podID="34033ece-4d02-4648-9025-0642096f42d3" containerID="071353d5270476ae0ced5276b381f8fce35523710780ff604333b3f0ed0bd407" exitCode=0 Jan 28 15:16:49 crc kubenswrapper[4981]: I0128 15:16:49.835544 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg" event={"ID":"34033ece-4d02-4648-9025-0642096f42d3","Type":"ContainerDied","Data":"071353d5270476ae0ced5276b381f8fce35523710780ff604333b3f0ed0bd407"} Jan 28 15:16:49 crc kubenswrapper[4981]: I0128 15:16:49.897522 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:16:49 crc kubenswrapper[4981]: I0128 15:16:49.897634 4981 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:16:51 crc kubenswrapper[4981]: I0128 15:16:51.180055 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg" Jan 28 15:16:51 crc kubenswrapper[4981]: I0128 15:16:51.202898 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34033ece-4d02-4648-9025-0642096f42d3-util\") pod \"34033ece-4d02-4648-9025-0642096f42d3\" (UID: \"34033ece-4d02-4648-9025-0642096f42d3\") " Jan 28 15:16:51 crc kubenswrapper[4981]: I0128 15:16:51.203021 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmqq5\" (UniqueName: \"kubernetes.io/projected/34033ece-4d02-4648-9025-0642096f42d3-kube-api-access-gmqq5\") pod \"34033ece-4d02-4648-9025-0642096f42d3\" (UID: \"34033ece-4d02-4648-9025-0642096f42d3\") " Jan 28 15:16:51 crc kubenswrapper[4981]: I0128 15:16:51.203067 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34033ece-4d02-4648-9025-0642096f42d3-bundle\") pod \"34033ece-4d02-4648-9025-0642096f42d3\" (UID: \"34033ece-4d02-4648-9025-0642096f42d3\") " Jan 28 15:16:51 crc kubenswrapper[4981]: I0128 15:16:51.205150 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34033ece-4d02-4648-9025-0642096f42d3-bundle" (OuterVolumeSpecName: "bundle") pod "34033ece-4d02-4648-9025-0642096f42d3" (UID: "34033ece-4d02-4648-9025-0642096f42d3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:16:51 crc kubenswrapper[4981]: I0128 15:16:51.220143 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34033ece-4d02-4648-9025-0642096f42d3-kube-api-access-gmqq5" (OuterVolumeSpecName: "kube-api-access-gmqq5") pod "34033ece-4d02-4648-9025-0642096f42d3" (UID: "34033ece-4d02-4648-9025-0642096f42d3"). InnerVolumeSpecName "kube-api-access-gmqq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:16:51 crc kubenswrapper[4981]: I0128 15:16:51.222804 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34033ece-4d02-4648-9025-0642096f42d3-util" (OuterVolumeSpecName: "util") pod "34033ece-4d02-4648-9025-0642096f42d3" (UID: "34033ece-4d02-4648-9025-0642096f42d3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:16:51 crc kubenswrapper[4981]: I0128 15:16:51.304707 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmqq5\" (UniqueName: \"kubernetes.io/projected/34033ece-4d02-4648-9025-0642096f42d3-kube-api-access-gmqq5\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:51 crc kubenswrapper[4981]: I0128 15:16:51.304775 4981 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34033ece-4d02-4648-9025-0642096f42d3-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:51 crc kubenswrapper[4981]: I0128 15:16:51.304794 4981 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34033ece-4d02-4648-9025-0642096f42d3-util\") on node \"crc\" DevicePath \"\"" Jan 28 15:16:51 crc kubenswrapper[4981]: I0128 15:16:51.859983 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg" event={"ID":"34033ece-4d02-4648-9025-0642096f42d3","Type":"ContainerDied","Data":"ac95c19ad969f394022387c95f11715fac2177dd308f0f98ab39a5356fee62d7"} Jan 28 15:16:51 crc kubenswrapper[4981]: I0128 15:16:51.860039 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac95c19ad969f394022387c95f11715fac2177dd308f0f98ab39a5356fee62d7" Jan 28 15:16:51 crc kubenswrapper[4981]: I0128 15:16:51.860120 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.424837 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-599f895949-pcz4b"] Jan 28 15:17:00 crc kubenswrapper[4981]: E0128 15:17:00.426201 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34033ece-4d02-4648-9025-0642096f42d3" containerName="pull" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.426277 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="34033ece-4d02-4648-9025-0642096f42d3" containerName="pull" Jan 28 15:17:00 crc kubenswrapper[4981]: E0128 15:17:00.426339 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c29d863-f1a8-42dc-8916-988d6d45f3d9" containerName="console" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.426390 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c29d863-f1a8-42dc-8916-988d6d45f3d9" containerName="console" Jan 28 15:17:00 crc kubenswrapper[4981]: E0128 15:17:00.426446 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34033ece-4d02-4648-9025-0642096f42d3" containerName="util" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.426497 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="34033ece-4d02-4648-9025-0642096f42d3" containerName="util" Jan 28 15:17:00 crc kubenswrapper[4981]: E0128 15:17:00.426555 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34033ece-4d02-4648-9025-0642096f42d3" containerName="extract" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.426604 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="34033ece-4d02-4648-9025-0642096f42d3" containerName="extract" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.426743 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="34033ece-4d02-4648-9025-0642096f42d3" containerName="extract" Jan 
28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.426805 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c29d863-f1a8-42dc-8916-988d6d45f3d9" containerName="console" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.427223 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-599f895949-pcz4b" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.428877 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.429576 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.429701 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.429912 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-42qxv" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.429957 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.450052 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-599f895949-pcz4b"] Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.538117 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a0d4786-f200-41af-b16c-23528e0537dd-webhook-cert\") pod \"metallb-operator-controller-manager-599f895949-pcz4b\" (UID: \"0a0d4786-f200-41af-b16c-23528e0537dd\") " pod="metallb-system/metallb-operator-controller-manager-599f895949-pcz4b" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.538293 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a0d4786-f200-41af-b16c-23528e0537dd-apiservice-cert\") pod \"metallb-operator-controller-manager-599f895949-pcz4b\" (UID: \"0a0d4786-f200-41af-b16c-23528e0537dd\") " pod="metallb-system/metallb-operator-controller-manager-599f895949-pcz4b" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.538328 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz2wc\" (UniqueName: \"kubernetes.io/projected/0a0d4786-f200-41af-b16c-23528e0537dd-kube-api-access-pz2wc\") pod \"metallb-operator-controller-manager-599f895949-pcz4b\" (UID: \"0a0d4786-f200-41af-b16c-23528e0537dd\") " pod="metallb-system/metallb-operator-controller-manager-599f895949-pcz4b" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.638973 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a0d4786-f200-41af-b16c-23528e0537dd-apiservice-cert\") pod \"metallb-operator-controller-manager-599f895949-pcz4b\" (UID: \"0a0d4786-f200-41af-b16c-23528e0537dd\") " pod="metallb-system/metallb-operator-controller-manager-599f895949-pcz4b" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.639392 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz2wc\" (UniqueName: 
\"kubernetes.io/projected/0a0d4786-f200-41af-b16c-23528e0537dd-kube-api-access-pz2wc\") pod \"metallb-operator-controller-manager-599f895949-pcz4b\" (UID: \"0a0d4786-f200-41af-b16c-23528e0537dd\") " pod="metallb-system/metallb-operator-controller-manager-599f895949-pcz4b" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.639424 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a0d4786-f200-41af-b16c-23528e0537dd-webhook-cert\") pod \"metallb-operator-controller-manager-599f895949-pcz4b\" (UID: \"0a0d4786-f200-41af-b16c-23528e0537dd\") " pod="metallb-system/metallb-operator-controller-manager-599f895949-pcz4b" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.647412 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0a0d4786-f200-41af-b16c-23528e0537dd-webhook-cert\") pod \"metallb-operator-controller-manager-599f895949-pcz4b\" (UID: \"0a0d4786-f200-41af-b16c-23528e0537dd\") " pod="metallb-system/metallb-operator-controller-manager-599f895949-pcz4b" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.649837 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0a0d4786-f200-41af-b16c-23528e0537dd-apiservice-cert\") pod \"metallb-operator-controller-manager-599f895949-pcz4b\" (UID: \"0a0d4786-f200-41af-b16c-23528e0537dd\") " pod="metallb-system/metallb-operator-controller-manager-599f895949-pcz4b" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.657708 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz2wc\" (UniqueName: \"kubernetes.io/projected/0a0d4786-f200-41af-b16c-23528e0537dd-kube-api-access-pz2wc\") pod \"metallb-operator-controller-manager-599f895949-pcz4b\" (UID: \"0a0d4786-f200-41af-b16c-23528e0537dd\") " pod="metallb-system/metallb-operator-controller-manager-599f895949-pcz4b" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.740994 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-599f895949-pcz4b" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.900515 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-59465cf79b-kmxjc"] Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.901817 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-59465cf79b-kmxjc" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.906344 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.906354 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.906485 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-8d92x" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.907604 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-59465cf79b-kmxjc"] Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.980730 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd39e6ba-068e-4ce1-936b-15b3c003cd04-webhook-cert\") pod \"metallb-operator-webhook-server-59465cf79b-kmxjc\" (UID: \"bd39e6ba-068e-4ce1-936b-15b3c003cd04\") " pod="metallb-system/metallb-operator-webhook-server-59465cf79b-kmxjc" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.980809 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd39e6ba-068e-4ce1-936b-15b3c003cd04-apiservice-cert\") pod \"metallb-operator-webhook-server-59465cf79b-kmxjc\" (UID: \"bd39e6ba-068e-4ce1-936b-15b3c003cd04\") " pod="metallb-system/metallb-operator-webhook-server-59465cf79b-kmxjc" Jan 28 15:17:00 crc kubenswrapper[4981]: I0128 15:17:00.980842 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg2gb\" (UniqueName: \"kubernetes.io/projected/bd39e6ba-068e-4ce1-936b-15b3c003cd04-kube-api-access-mg2gb\") pod \"metallb-operator-webhook-server-59465cf79b-kmxjc\" (UID: \"bd39e6ba-068e-4ce1-936b-15b3c003cd04\") " pod="metallb-system/metallb-operator-webhook-server-59465cf79b-kmxjc" Jan 28 15:17:01 crc kubenswrapper[4981]: I0128 15:17:01.081530 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd39e6ba-068e-4ce1-936b-15b3c003cd04-webhook-cert\") pod \"metallb-operator-webhook-server-59465cf79b-kmxjc\" (UID: \"bd39e6ba-068e-4ce1-936b-15b3c003cd04\") " pod="metallb-system/metallb-operator-webhook-server-59465cf79b-kmxjc" Jan 28 15:17:01 crc kubenswrapper[4981]: I0128 15:17:01.081619 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd39e6ba-068e-4ce1-936b-15b3c003cd04-apiservice-cert\") pod \"metallb-operator-webhook-server-59465cf79b-kmxjc\" (UID: \"bd39e6ba-068e-4ce1-936b-15b3c003cd04\") " pod="metallb-system/metallb-operator-webhook-server-59465cf79b-kmxjc" Jan 28 15:17:01 crc kubenswrapper[4981]: I0128 15:17:01.081657 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg2gb\" (UniqueName: \"kubernetes.io/projected/bd39e6ba-068e-4ce1-936b-15b3c003cd04-kube-api-access-mg2gb\") pod \"metallb-operator-webhook-server-59465cf79b-kmxjc\" (UID: \"bd39e6ba-068e-4ce1-936b-15b3c003cd04\") " pod="metallb-system/metallb-operator-webhook-server-59465cf79b-kmxjc" Jan 28 15:17:01 crc kubenswrapper[4981]: I0128 
15:17:01.089010 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd39e6ba-068e-4ce1-936b-15b3c003cd04-apiservice-cert\") pod \"metallb-operator-webhook-server-59465cf79b-kmxjc\" (UID: \"bd39e6ba-068e-4ce1-936b-15b3c003cd04\") " pod="metallb-system/metallb-operator-webhook-server-59465cf79b-kmxjc" Jan 28 15:17:01 crc kubenswrapper[4981]: I0128 15:17:01.097094 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg2gb\" (UniqueName: \"kubernetes.io/projected/bd39e6ba-068e-4ce1-936b-15b3c003cd04-kube-api-access-mg2gb\") pod \"metallb-operator-webhook-server-59465cf79b-kmxjc\" (UID: \"bd39e6ba-068e-4ce1-936b-15b3c003cd04\") " pod="metallb-system/metallb-operator-webhook-server-59465cf79b-kmxjc" Jan 28 15:17:01 crc kubenswrapper[4981]: I0128 15:17:01.100289 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd39e6ba-068e-4ce1-936b-15b3c003cd04-webhook-cert\") pod \"metallb-operator-webhook-server-59465cf79b-kmxjc\" (UID: \"bd39e6ba-068e-4ce1-936b-15b3c003cd04\") " pod="metallb-system/metallb-operator-webhook-server-59465cf79b-kmxjc" Jan 28 15:17:01 crc kubenswrapper[4981]: I0128 15:17:01.229974 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-59465cf79b-kmxjc" Jan 28 15:17:01 crc kubenswrapper[4981]: I0128 15:17:01.251856 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-599f895949-pcz4b"] Jan 28 15:17:01 crc kubenswrapper[4981]: W0128 15:17:01.264050 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a0d4786_f200_41af_b16c_23528e0537dd.slice/crio-ad52d03f202dbdab7ec29ce58fa19be4a337522efcf11b7777c598017f2a0b8d WatchSource:0}: Error finding container ad52d03f202dbdab7ec29ce58fa19be4a337522efcf11b7777c598017f2a0b8d: Status 404 returned error can't find the container with id ad52d03f202dbdab7ec29ce58fa19be4a337522efcf11b7777c598017f2a0b8d Jan 28 15:17:01 crc kubenswrapper[4981]: I0128 15:17:01.478608 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-59465cf79b-kmxjc"] Jan 28 15:17:01 crc kubenswrapper[4981]: W0128 15:17:01.484972 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd39e6ba_068e_4ce1_936b_15b3c003cd04.slice/crio-23f983ea48565abe3e5da809f0e83f3f4ad5a33e5e82978977c931e84a019039 WatchSource:0}: Error finding container 23f983ea48565abe3e5da809f0e83f3f4ad5a33e5e82978977c931e84a019039: Status 404 returned error can't find the container with id 23f983ea48565abe3e5da809f0e83f3f4ad5a33e5e82978977c931e84a019039 Jan 28 15:17:01 crc kubenswrapper[4981]: I0128 15:17:01.919895 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-59465cf79b-kmxjc" event={"ID":"bd39e6ba-068e-4ce1-936b-15b3c003cd04","Type":"ContainerStarted","Data":"23f983ea48565abe3e5da809f0e83f3f4ad5a33e5e82978977c931e84a019039"} Jan 28 15:17:01 crc kubenswrapper[4981]: I0128 15:17:01.921693 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-599f895949-pcz4b" 
event={"ID":"0a0d4786-f200-41af-b16c-23528e0537dd","Type":"ContainerStarted","Data":"ad52d03f202dbdab7ec29ce58fa19be4a337522efcf11b7777c598017f2a0b8d"} Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.689513 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b6qkp"] Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.691077 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.707069 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6qkp"] Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.848348 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb67241d-b66d-4996-965c-8e3e51cb57bf-catalog-content\") pod \"certified-operators-b6qkp\" (UID: \"cb67241d-b66d-4996-965c-8e3e51cb57bf\") " pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.848396 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb67241d-b66d-4996-965c-8e3e51cb57bf-utilities\") pod \"certified-operators-b6qkp\" (UID: \"cb67241d-b66d-4996-965c-8e3e51cb57bf\") " pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.848424 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9k7s\" (UniqueName: \"kubernetes.io/projected/cb67241d-b66d-4996-965c-8e3e51cb57bf-kube-api-access-m9k7s\") pod \"certified-operators-b6qkp\" (UID: \"cb67241d-b66d-4996-965c-8e3e51cb57bf\") " pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.950093 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb67241d-b66d-4996-965c-8e3e51cb57bf-catalog-content\") pod \"certified-operators-b6qkp\" (UID: \"cb67241d-b66d-4996-965c-8e3e51cb57bf\") " pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.950147 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb67241d-b66d-4996-965c-8e3e51cb57bf-utilities\") pod \"certified-operators-b6qkp\" (UID: \"cb67241d-b66d-4996-965c-8e3e51cb57bf\") " pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.950175 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9k7s\" (UniqueName: \"kubernetes.io/projected/cb67241d-b66d-4996-965c-8e3e51cb57bf-kube-api-access-m9k7s\") pod \"certified-operators-b6qkp\" (UID: \"cb67241d-b66d-4996-965c-8e3e51cb57bf\") " pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.950770 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb67241d-b66d-4996-965c-8e3e51cb57bf-utilities\") pod \"certified-operators-b6qkp\" (UID: \"cb67241d-b66d-4996-965c-8e3e51cb57bf\") " pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 
Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.689513 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b6qkp"] Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.691077 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.707069 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6qkp"] Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.848348 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb67241d-b66d-4996-965c-8e3e51cb57bf-catalog-content\") pod \"certified-operators-b6qkp\" (UID: \"cb67241d-b66d-4996-965c-8e3e51cb57bf\") " pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.848396 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb67241d-b66d-4996-965c-8e3e51cb57bf-utilities\") pod \"certified-operators-b6qkp\" (UID: \"cb67241d-b66d-4996-965c-8e3e51cb57bf\") " pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.848424 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9k7s\" (UniqueName: \"kubernetes.io/projected/cb67241d-b66d-4996-965c-8e3e51cb57bf-kube-api-access-m9k7s\") pod \"certified-operators-b6qkp\" (UID: \"cb67241d-b66d-4996-965c-8e3e51cb57bf\") " pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.950093 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb67241d-b66d-4996-965c-8e3e51cb57bf-catalog-content\") pod \"certified-operators-b6qkp\" (UID: \"cb67241d-b66d-4996-965c-8e3e51cb57bf\") " pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.950147 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb67241d-b66d-4996-965c-8e3e51cb57bf-utilities\") pod \"certified-operators-b6qkp\" (UID: \"cb67241d-b66d-4996-965c-8e3e51cb57bf\") " pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.950175 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9k7s\" (UniqueName: \"kubernetes.io/projected/cb67241d-b66d-4996-965c-8e3e51cb57bf-kube-api-access-m9k7s\") pod \"certified-operators-b6qkp\" (UID: \"cb67241d-b66d-4996-965c-8e3e51cb57bf\") " pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.950770 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb67241d-b66d-4996-965c-8e3e51cb57bf-utilities\") pod \"certified-operators-b6qkp\" (UID: \"cb67241d-b66d-4996-965c-8e3e51cb57bf\") " pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.952068 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb67241d-b66d-4996-965c-8e3e51cb57bf-catalog-content\") pod \"certified-operators-b6qkp\" (UID: \"cb67241d-b66d-4996-965c-8e3e51cb57bf\") " pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:04 crc kubenswrapper[4981]: I0128 15:17:04.974957 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9k7s\" (UniqueName: \"kubernetes.io/projected/cb67241d-b66d-4996-965c-8e3e51cb57bf-kube-api-access-m9k7s\") pod \"certified-operators-b6qkp\" (UID: \"cb67241d-b66d-4996-965c-8e3e51cb57bf\") " pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:05 crc kubenswrapper[4981]: I0128 15:17:05.021846 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:06 crc kubenswrapper[4981]: I0128 15:17:06.364496 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6qkp"] Jan 28 15:17:06 crc kubenswrapper[4981]: W0128 15:17:06.375817 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb67241d_b66d_4996_965c_8e3e51cb57bf.slice/crio-6d52a1dd0c04b2c0bfff168ef2ca2d7435c67b601f4b41e2bc89ffc16ce2750e WatchSource:0}: Error finding container 6d52a1dd0c04b2c0bfff168ef2ca2d7435c67b601f4b41e2bc89ffc16ce2750e: Status 404 returned error can't find the container with id 6d52a1dd0c04b2c0bfff168ef2ca2d7435c67b601f4b41e2bc89ffc16ce2750e Jan 28 15:17:06 crc kubenswrapper[4981]: I0128 15:17:06.973912 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-599f895949-pcz4b" event={"ID":"0a0d4786-f200-41af-b16c-23528e0537dd","Type":"ContainerStarted","Data":"19b32798a2efbfd2d5012e132f588a7d2bef50952b6b4e420cde73745841059e"} Jan 28 15:17:06 crc kubenswrapper[4981]: I0128 15:17:06.975842 4981 generic.go:334] "Generic (PLEG): container finished" podID="cb67241d-b66d-4996-965c-8e3e51cb57bf" containerID="c3dfa8f40fa8f651815e0aaace69eec6fd76f2712d18769c73883d35ec69e0eb" exitCode=0 Jan 28 15:17:06 crc kubenswrapper[4981]: I0128 15:17:06.975933 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6qkp" event={"ID":"cb67241d-b66d-4996-965c-8e3e51cb57bf","Type":"ContainerDied","Data":"c3dfa8f40fa8f651815e0aaace69eec6fd76f2712d18769c73883d35ec69e0eb"} Jan 28 15:17:06 crc kubenswrapper[4981]: I0128 15:17:06.975987 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6qkp" event={"ID":"cb67241d-b66d-4996-965c-8e3e51cb57bf","Type":"ContainerStarted","Data":"6d52a1dd0c04b2c0bfff168ef2ca2d7435c67b601f4b41e2bc89ffc16ce2750e"} Jan 28 15:17:06 crc kubenswrapper[4981]: I0128 15:17:06.983292 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-59465cf79b-kmxjc" event={"ID":"bd39e6ba-068e-4ce1-936b-15b3c003cd04","Type":"ContainerStarted","Data":"10414099d18234801d0707a6a0660d07c9b9e333b722e93305404e618388edb3"} Jan 28 15:17:06 crc kubenswrapper[4981]: I0128 15:17:06.983548 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-59465cf79b-kmxjc" Jan 28 15:17:07 crc kubenswrapper[4981]: I0128 15:17:07.010372 4981 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="metallb-system/metallb-operator-controller-manager-599f895949-pcz4b" podStartSLOduration=2.353996487 podStartE2EDuration="7.010345092s" podCreationTimestamp="2026-01-28 15:17:00 +0000 UTC" firstStartedPulling="2026-01-28 15:17:01.267719095 +0000 UTC m=+832.719877336" lastFinishedPulling="2026-01-28 15:17:05.9240677 +0000 UTC m=+837.376225941" observedRunningTime="2026-01-28 15:17:07.002216771 +0000 UTC m=+838.454375042" watchObservedRunningTime="2026-01-28 15:17:07.010345092 +0000 UTC m=+838.462503363" Jan 28 15:17:07 crc kubenswrapper[4981]: I0128 15:17:07.034434 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-59465cf79b-kmxjc" podStartSLOduration=2.576093576 podStartE2EDuration="7.034408238s" podCreationTimestamp="2026-01-28 15:17:00 +0000 UTC" firstStartedPulling="2026-01-28 15:17:01.488838816 +0000 UTC m=+832.940997057" lastFinishedPulling="2026-01-28 15:17:05.947153478 +0000 UTC m=+837.399311719" observedRunningTime="2026-01-28 15:17:07.030382148 +0000 UTC m=+838.482540429" watchObservedRunningTime="2026-01-28 15:17:07.034408238 +0000 UTC m=+838.486566519" Jan 28 15:17:07 crc kubenswrapper[4981]: I0128 15:17:07.988377 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-599f895949-pcz4b" Jan 28 15:17:08 crc kubenswrapper[4981]: I0128 15:17:08.998047 4981 generic.go:334] "Generic (PLEG): container finished" podID="cb67241d-b66d-4996-965c-8e3e51cb57bf" containerID="1f9ba723501fa432e5c9a07e1afe139de232dbb41222244263cbc4f82f9bd4df" exitCode=0 Jan 28 15:17:08 crc kubenswrapper[4981]: I0128 15:17:08.998112 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6qkp" event={"ID":"cb67241d-b66d-4996-965c-8e3e51cb57bf","Type":"ContainerDied","Data":"1f9ba723501fa432e5c9a07e1afe139de232dbb41222244263cbc4f82f9bd4df"} Jan 28 15:17:10 crc kubenswrapper[4981]: I0128 15:17:10.014651 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6qkp" event={"ID":"cb67241d-b66d-4996-965c-8e3e51cb57bf","Type":"ContainerStarted","Data":"60a0814427e7fc37e962c94c5805c1680f2710240f8f68be491ec539ce2d76f2"} Jan 28 15:17:10 crc kubenswrapper[4981]: I0128 15:17:10.041943 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b6qkp" podStartSLOduration=3.633943495 podStartE2EDuration="6.041927111s" podCreationTimestamp="2026-01-28 15:17:04 +0000 UTC" firstStartedPulling="2026-01-28 15:17:06.9783 +0000 UTC m=+838.430458271" lastFinishedPulling="2026-01-28 15:17:09.386283636 +0000 UTC m=+840.838441887" observedRunningTime="2026-01-28 15:17:10.03749777 +0000 UTC m=+841.489656021" watchObservedRunningTime="2026-01-28 15:17:10.041927111 +0000 UTC m=+841.494085362" Jan 28 15:17:15 crc kubenswrapper[4981]: I0128 15:17:15.022753 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:15 crc kubenswrapper[4981]: I0128 15:17:15.023637 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:15 crc kubenswrapper[4981]: I0128 15:17:15.068859 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:15 crc kubenswrapper[4981]: I0128 15:17:15.118975 4981 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:17 crc kubenswrapper[4981]: I0128 15:17:17.393922 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b6qkp"] Jan 28 15:17:17 crc kubenswrapper[4981]: I0128 15:17:17.394582 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b6qkp" podUID="cb67241d-b66d-4996-965c-8e3e51cb57bf" containerName="registry-server" containerID="cri-o://60a0814427e7fc37e962c94c5805c1680f2710240f8f68be491ec539ce2d76f2" gracePeriod=2 Jan 28 15:17:17 crc kubenswrapper[4981]: I0128 15:17:17.831478 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:17 crc kubenswrapper[4981]: I0128 15:17:17.921925 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9k7s\" (UniqueName: \"kubernetes.io/projected/cb67241d-b66d-4996-965c-8e3e51cb57bf-kube-api-access-m9k7s\") pod \"cb67241d-b66d-4996-965c-8e3e51cb57bf\" (UID: \"cb67241d-b66d-4996-965c-8e3e51cb57bf\") " Jan 28 15:17:17 crc kubenswrapper[4981]: I0128 15:17:17.922318 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb67241d-b66d-4996-965c-8e3e51cb57bf-utilities\") pod \"cb67241d-b66d-4996-965c-8e3e51cb57bf\" (UID: \"cb67241d-b66d-4996-965c-8e3e51cb57bf\") " Jan 28 15:17:17 crc kubenswrapper[4981]: I0128 15:17:17.922354 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb67241d-b66d-4996-965c-8e3e51cb57bf-catalog-content\") pod \"cb67241d-b66d-4996-965c-8e3e51cb57bf\" (UID: \"cb67241d-b66d-4996-965c-8e3e51cb57bf\") " Jan 28 15:17:17 crc kubenswrapper[4981]: I0128 15:17:17.923112 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb67241d-b66d-4996-965c-8e3e51cb57bf-utilities" (OuterVolumeSpecName: "utilities") pod "cb67241d-b66d-4996-965c-8e3e51cb57bf" (UID: "cb67241d-b66d-4996-965c-8e3e51cb57bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:17:17 crc kubenswrapper[4981]: I0128 15:17:17.929097 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb67241d-b66d-4996-965c-8e3e51cb57bf-kube-api-access-m9k7s" (OuterVolumeSpecName: "kube-api-access-m9k7s") pod "cb67241d-b66d-4996-965c-8e3e51cb57bf" (UID: "cb67241d-b66d-4996-965c-8e3e51cb57bf"). InnerVolumeSpecName "kube-api-access-m9k7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:17:17 crc kubenswrapper[4981]: I0128 15:17:17.972568 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb67241d-b66d-4996-965c-8e3e51cb57bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb67241d-b66d-4996-965c-8e3e51cb57bf" (UID: "cb67241d-b66d-4996-965c-8e3e51cb57bf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:17:18 crc kubenswrapper[4981]: I0128 15:17:18.024225 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb67241d-b66d-4996-965c-8e3e51cb57bf-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:18 crc kubenswrapper[4981]: I0128 15:17:18.024314 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb67241d-b66d-4996-965c-8e3e51cb57bf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:18 crc kubenswrapper[4981]: I0128 15:17:18.024337 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9k7s\" (UniqueName: \"kubernetes.io/projected/cb67241d-b66d-4996-965c-8e3e51cb57bf-kube-api-access-m9k7s\") on node \"crc\" DevicePath \"\"" Jan 28 15:17:18 crc kubenswrapper[4981]: I0128 15:17:18.079865 4981 generic.go:334] "Generic (PLEG): container finished" podID="cb67241d-b66d-4996-965c-8e3e51cb57bf" containerID="60a0814427e7fc37e962c94c5805c1680f2710240f8f68be491ec539ce2d76f2" exitCode=0 Jan 28 15:17:18 crc kubenswrapper[4981]: I0128 15:17:18.079913 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6qkp" event={"ID":"cb67241d-b66d-4996-965c-8e3e51cb57bf","Type":"ContainerDied","Data":"60a0814427e7fc37e962c94c5805c1680f2710240f8f68be491ec539ce2d76f2"} Jan 28 15:17:18 crc kubenswrapper[4981]: I0128 15:17:18.079918 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6qkp" Jan 28 15:17:18 crc kubenswrapper[4981]: I0128 15:17:18.079945 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6qkp" event={"ID":"cb67241d-b66d-4996-965c-8e3e51cb57bf","Type":"ContainerDied","Data":"6d52a1dd0c04b2c0bfff168ef2ca2d7435c67b601f4b41e2bc89ffc16ce2750e"} Jan 28 15:17:18 crc kubenswrapper[4981]: I0128 15:17:18.079968 4981 scope.go:117] "RemoveContainer" containerID="60a0814427e7fc37e962c94c5805c1680f2710240f8f68be491ec539ce2d76f2" Jan 28 15:17:18 crc kubenswrapper[4981]: I0128 15:17:18.109096 4981 scope.go:117] "RemoveContainer" containerID="1f9ba723501fa432e5c9a07e1afe139de232dbb41222244263cbc4f82f9bd4df" Jan 28 15:17:18 crc kubenswrapper[4981]: I0128 15:17:18.112239 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b6qkp"] Jan 28 15:17:18 crc kubenswrapper[4981]: I0128 15:17:18.116502 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b6qkp"] Jan 28 15:17:18 crc kubenswrapper[4981]: I0128 15:17:18.125423 4981 scope.go:117] "RemoveContainer" containerID="c3dfa8f40fa8f651815e0aaace69eec6fd76f2712d18769c73883d35ec69e0eb" Jan 28 15:17:18 crc kubenswrapper[4981]: I0128 15:17:18.150491 4981 scope.go:117] "RemoveContainer" containerID="60a0814427e7fc37e962c94c5805c1680f2710240f8f68be491ec539ce2d76f2" Jan 28 15:17:18 crc kubenswrapper[4981]: E0128 15:17:18.151049 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60a0814427e7fc37e962c94c5805c1680f2710240f8f68be491ec539ce2d76f2\": container with ID starting with 60a0814427e7fc37e962c94c5805c1680f2710240f8f68be491ec539ce2d76f2 not found: ID does not exist" containerID="60a0814427e7fc37e962c94c5805c1680f2710240f8f68be491ec539ce2d76f2" Jan 28 15:17:18 crc kubenswrapper[4981]: I0128 15:17:18.151112 
4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60a0814427e7fc37e962c94c5805c1680f2710240f8f68be491ec539ce2d76f2"} err="failed to get container status \"60a0814427e7fc37e962c94c5805c1680f2710240f8f68be491ec539ce2d76f2\": rpc error: code = NotFound desc = could not find container \"60a0814427e7fc37e962c94c5805c1680f2710240f8f68be491ec539ce2d76f2\": container with ID starting with 60a0814427e7fc37e962c94c5805c1680f2710240f8f68be491ec539ce2d76f2 not found: ID does not exist" Jan 28 15:17:18 crc kubenswrapper[4981]: I0128 15:17:18.151148 4981 scope.go:117] "RemoveContainer" containerID="1f9ba723501fa432e5c9a07e1afe139de232dbb41222244263cbc4f82f9bd4df" Jan 28 15:17:18 crc kubenswrapper[4981]: E0128 15:17:18.151569 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f9ba723501fa432e5c9a07e1afe139de232dbb41222244263cbc4f82f9bd4df\": container with ID starting with 1f9ba723501fa432e5c9a07e1afe139de232dbb41222244263cbc4f82f9bd4df not found: ID does not exist" containerID="1f9ba723501fa432e5c9a07e1afe139de232dbb41222244263cbc4f82f9bd4df" Jan 28 15:17:18 crc kubenswrapper[4981]: I0128 15:17:18.151594 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f9ba723501fa432e5c9a07e1afe139de232dbb41222244263cbc4f82f9bd4df"} err="failed to get container status \"1f9ba723501fa432e5c9a07e1afe139de232dbb41222244263cbc4f82f9bd4df\": rpc error: code = NotFound desc = could not find container \"1f9ba723501fa432e5c9a07e1afe139de232dbb41222244263cbc4f82f9bd4df\": container with ID starting with 1f9ba723501fa432e5c9a07e1afe139de232dbb41222244263cbc4f82f9bd4df not found: ID does not exist" Jan 28 15:17:18 crc kubenswrapper[4981]: I0128 15:17:18.151609 4981 scope.go:117] "RemoveContainer" containerID="c3dfa8f40fa8f651815e0aaace69eec6fd76f2712d18769c73883d35ec69e0eb" Jan 28 15:17:18 crc kubenswrapper[4981]: E0128 15:17:18.152476 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3dfa8f40fa8f651815e0aaace69eec6fd76f2712d18769c73883d35ec69e0eb\": container with ID starting with c3dfa8f40fa8f651815e0aaace69eec6fd76f2712d18769c73883d35ec69e0eb not found: ID does not exist" containerID="c3dfa8f40fa8f651815e0aaace69eec6fd76f2712d18769c73883d35ec69e0eb" Jan 28 15:17:18 crc kubenswrapper[4981]: I0128 15:17:18.152532 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3dfa8f40fa8f651815e0aaace69eec6fd76f2712d18769c73883d35ec69e0eb"} err="failed to get container status \"c3dfa8f40fa8f651815e0aaace69eec6fd76f2712d18769c73883d35ec69e0eb\": rpc error: code = NotFound desc = could not find container \"c3dfa8f40fa8f651815e0aaace69eec6fd76f2712d18769c73883d35ec69e0eb\": container with ID starting with c3dfa8f40fa8f651815e0aaace69eec6fd76f2712d18769c73883d35ec69e0eb not found: ID does not exist" Jan 28 15:17:19 crc kubenswrapper[4981]: I0128 15:17:19.330494 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb67241d-b66d-4996-965c-8e3e51cb57bf" path="/var/lib/kubelet/pods/cb67241d-b66d-4996-965c-8e3e51cb57bf/volumes" Jan 28 15:17:19 crc kubenswrapper[4981]: I0128 15:17:19.897433 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:17:19 crc kubenswrapper[4981]: I0128 15:17:19.897540 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:17:19 crc kubenswrapper[4981]: I0128 15:17:19.897622 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:17:19 crc kubenswrapper[4981]: I0128 15:17:19.898721 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c69a7071dbf3ec3f1115d8a9515e0de8b513ecd90cb4130db9534e4ea3ba8dac"} pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:17:19 crc kubenswrapper[4981]: I0128 15:17:19.898854 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" containerID="cri-o://c69a7071dbf3ec3f1115d8a9515e0de8b513ecd90cb4130db9534e4ea3ba8dac" gracePeriod=600 Jan 28 15:17:20 crc kubenswrapper[4981]: I0128 15:17:20.103313 4981 generic.go:334] "Generic (PLEG): container finished" podID="67525d77-715e-4ec3-bdbb-6854657355c0" containerID="c69a7071dbf3ec3f1115d8a9515e0de8b513ecd90cb4130db9534e4ea3ba8dac" exitCode=0 Jan 28 15:17:20 crc kubenswrapper[4981]: I0128 15:17:20.103395 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerDied","Data":"c69a7071dbf3ec3f1115d8a9515e0de8b513ecd90cb4130db9534e4ea3ba8dac"} Jan 28 15:17:20 crc kubenswrapper[4981]: I0128 15:17:20.103780 4981 scope.go:117] "RemoveContainer" containerID="d03207bd7d360434e69cdf83589709537b56f9611d1c6a12671a9b7de643ea90" Jan 28 15:17:21 crc kubenswrapper[4981]: I0128 15:17:21.128997 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerStarted","Data":"176dd31ff4b98ab75c0fb5c532e4cb21dde081ab7085a97e6c5485cd5bc31437"}
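[Annotation] The sequence above is the kubelet's liveness-probe remediation loop end to end: the HTTP check against the machine-config-daemon fails with connection refused, the kubelet kills the container with its 600s grace period, and PLEG then reports ContainerDied followed by ContainerStarted for the replacement. A simplified stand-in for the failing check, using the endpoint shown in the log (the 1-second timeout is an assumption standing in for the probe's configured timeoutSeconds; the kubelet's real prober is more involved):

// probecheck.go - mimic the HTTP liveness probe semantics seen in this log:
// a GET where a transport error such as "connection refused" is a failure
// and a 2xx/3xx status is a success.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// The branch the log shows: dial tcp ... connect: connection refused.
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probe success:", resp.Status) // kubelet treats 2xx/3xx as healthy
	} else {
		fmt.Println("probe failure: status", resp.Status)
	}
}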
containerName="extract-content" Jan 28 15:17:41 crc kubenswrapper[4981]: E0128 15:17:41.456827 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb67241d-b66d-4996-965c-8e3e51cb57bf" containerName="extract-utilities" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.456835 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb67241d-b66d-4996-965c-8e3e51cb57bf" containerName="extract-utilities" Jan 28 15:17:41 crc kubenswrapper[4981]: E0128 15:17:41.456851 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb67241d-b66d-4996-965c-8e3e51cb57bf" containerName="registry-server" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.456859 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb67241d-b66d-4996-965c-8e3e51cb57bf" containerName="registry-server" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.456996 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb67241d-b66d-4996-965c-8e3e51cb57bf" containerName="registry-server" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.457490 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksslx" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.460939 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-l2z7g" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.461175 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.461975 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-hvd88"] Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.464085 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.465235 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.467058 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.483006 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksslx"] Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.542889 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-86r4q"] Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.543735 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-86r4q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.546329 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-qvbhh" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.546601 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.546705 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.546804 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.560287 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-rxw2q"] Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.561370 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-rxw2q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.564676 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.572101 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2f8b9f36-0910-4437-b804-d62c58740667-frr-startup\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.572148 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2f8b9f36-0910-4437-b804-d62c58740667-metrics-certs\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.572181 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fnlq\" (UniqueName: \"kubernetes.io/projected/2f8b9f36-0910-4437-b804-d62c58740667-kube-api-access-5fnlq\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.572215 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4bd7\" (UniqueName: \"kubernetes.io/projected/c0df8723-60c7-4731-8420-e3279d5f1fce-kube-api-access-z4bd7\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksslx\" (UID: \"c0df8723-60c7-4731-8420-e3279d5f1fce\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksslx" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.572379 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2f8b9f36-0910-4437-b804-d62c58740667-metrics\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.572521 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0df8723-60c7-4731-8420-e3279d5f1fce-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksslx\" (UID: 
\"c0df8723-60c7-4731-8420-e3279d5f1fce\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksslx" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.572585 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2f8b9f36-0910-4437-b804-d62c58740667-frr-conf\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.572613 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2f8b9f36-0910-4437-b804-d62c58740667-reloader\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.572714 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2f8b9f36-0910-4437-b804-d62c58740667-frr-sockets\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.580513 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-rxw2q"] Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.674252 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de4f8121-43f6-4041-873d-2c13aca10ed9-metrics-certs\") pod \"controller-6968d8fdc4-rxw2q\" (UID: \"de4f8121-43f6-4041-873d-2c13aca10ed9\") " pod="metallb-system/controller-6968d8fdc4-rxw2q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.674294 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2f8b9f36-0910-4437-b804-d62c58740667-frr-startup\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.674386 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2f8b9f36-0910-4437-b804-d62c58740667-metrics-certs\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.674436 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/50c5243a-5761-47df-b450-770a6522770c-memberlist\") pod \"speaker-86r4q\" (UID: \"50c5243a-5761-47df-b450-770a6522770c\") " pod="metallb-system/speaker-86r4q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.674467 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50c5243a-5761-47df-b450-770a6522770c-metrics-certs\") pod \"speaker-86r4q\" (UID: \"50c5243a-5761-47df-b450-770a6522770c\") " pod="metallb-system/speaker-86r4q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.674508 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fnlq\" (UniqueName: \"kubernetes.io/projected/2f8b9f36-0910-4437-b804-d62c58740667-kube-api-access-5fnlq\") pod 
\"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.674530 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4bd7\" (UniqueName: \"kubernetes.io/projected/c0df8723-60c7-4731-8420-e3279d5f1fce-kube-api-access-z4bd7\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksslx\" (UID: \"c0df8723-60c7-4731-8420-e3279d5f1fce\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksslx" Jan 28 15:17:41 crc kubenswrapper[4981]: E0128 15:17:41.674560 4981 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 28 15:17:41 crc kubenswrapper[4981]: E0128 15:17:41.674630 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2f8b9f36-0910-4437-b804-d62c58740667-metrics-certs podName:2f8b9f36-0910-4437-b804-d62c58740667 nodeName:}" failed. No retries permitted until 2026-01-28 15:17:42.174607633 +0000 UTC m=+873.626765874 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2f8b9f36-0910-4437-b804-d62c58740667-metrics-certs") pod "frr-k8s-hvd88" (UID: "2f8b9f36-0910-4437-b804-d62c58740667") : secret "frr-k8s-certs-secret" not found Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.674642 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2f8b9f36-0910-4437-b804-d62c58740667-metrics\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.674684 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ddkr\" (UniqueName: \"kubernetes.io/projected/de4f8121-43f6-4041-873d-2c13aca10ed9-kube-api-access-4ddkr\") pod \"controller-6968d8fdc4-rxw2q\" (UID: \"de4f8121-43f6-4041-873d-2c13aca10ed9\") " pod="metallb-system/controller-6968d8fdc4-rxw2q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.674715 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvxn5\" (UniqueName: \"kubernetes.io/projected/50c5243a-5761-47df-b450-770a6522770c-kube-api-access-dvxn5\") pod \"speaker-86r4q\" (UID: \"50c5243a-5761-47df-b450-770a6522770c\") " pod="metallb-system/speaker-86r4q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.674770 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0df8723-60c7-4731-8420-e3279d5f1fce-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksslx\" (UID: \"c0df8723-60c7-4731-8420-e3279d5f1fce\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksslx" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.674832 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2f8b9f36-0910-4437-b804-d62c58740667-frr-conf\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.674872 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/de4f8121-43f6-4041-873d-2c13aca10ed9-cert\") pod \"controller-6968d8fdc4-rxw2q\" 
(UID: \"de4f8121-43f6-4041-873d-2c13aca10ed9\") " pod="metallb-system/controller-6968d8fdc4-rxw2q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.674904 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2f8b9f36-0910-4437-b804-d62c58740667-reloader\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.674928 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2f8b9f36-0910-4437-b804-d62c58740667-frr-sockets\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.674955 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/50c5243a-5761-47df-b450-770a6522770c-metallb-excludel2\") pod \"speaker-86r4q\" (UID: \"50c5243a-5761-47df-b450-770a6522770c\") " pod="metallb-system/speaker-86r4q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.675134 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2f8b9f36-0910-4437-b804-d62c58740667-frr-startup\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.675216 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2f8b9f36-0910-4437-b804-d62c58740667-frr-conf\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.675299 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2f8b9f36-0910-4437-b804-d62c58740667-metrics\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.675464 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2f8b9f36-0910-4437-b804-d62c58740667-reloader\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.675701 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2f8b9f36-0910-4437-b804-d62c58740667-frr-sockets\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.680642 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0df8723-60c7-4731-8420-e3279d5f1fce-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksslx\" (UID: \"c0df8723-60c7-4731-8420-e3279d5f1fce\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksslx" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.694809 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fnlq\" (UniqueName: 
\"kubernetes.io/projected/2f8b9f36-0910-4437-b804-d62c58740667-kube-api-access-5fnlq\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.695725 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4bd7\" (UniqueName: \"kubernetes.io/projected/c0df8723-60c7-4731-8420-e3279d5f1fce-kube-api-access-z4bd7\") pod \"frr-k8s-webhook-server-7df86c4f6c-ksslx\" (UID: \"c0df8723-60c7-4731-8420-e3279d5f1fce\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksslx" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.776174 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/de4f8121-43f6-4041-873d-2c13aca10ed9-cert\") pod \"controller-6968d8fdc4-rxw2q\" (UID: \"de4f8121-43f6-4041-873d-2c13aca10ed9\") " pod="metallb-system/controller-6968d8fdc4-rxw2q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.776251 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/50c5243a-5761-47df-b450-770a6522770c-metallb-excludel2\") pod \"speaker-86r4q\" (UID: \"50c5243a-5761-47df-b450-770a6522770c\") " pod="metallb-system/speaker-86r4q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.776293 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de4f8121-43f6-4041-873d-2c13aca10ed9-metrics-certs\") pod \"controller-6968d8fdc4-rxw2q\" (UID: \"de4f8121-43f6-4041-873d-2c13aca10ed9\") " pod="metallb-system/controller-6968d8fdc4-rxw2q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.776338 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/50c5243a-5761-47df-b450-770a6522770c-memberlist\") pod \"speaker-86r4q\" (UID: \"50c5243a-5761-47df-b450-770a6522770c\") " pod="metallb-system/speaker-86r4q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.776363 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50c5243a-5761-47df-b450-770a6522770c-metrics-certs\") pod \"speaker-86r4q\" (UID: \"50c5243a-5761-47df-b450-770a6522770c\") " pod="metallb-system/speaker-86r4q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.776417 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ddkr\" (UniqueName: \"kubernetes.io/projected/de4f8121-43f6-4041-873d-2c13aca10ed9-kube-api-access-4ddkr\") pod \"controller-6968d8fdc4-rxw2q\" (UID: \"de4f8121-43f6-4041-873d-2c13aca10ed9\") " pod="metallb-system/controller-6968d8fdc4-rxw2q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.776443 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvxn5\" (UniqueName: \"kubernetes.io/projected/50c5243a-5761-47df-b450-770a6522770c-kube-api-access-dvxn5\") pod \"speaker-86r4q\" (UID: \"50c5243a-5761-47df-b450-770a6522770c\") " pod="metallb-system/speaker-86r4q" Jan 28 15:17:41 crc kubenswrapper[4981]: E0128 15:17:41.776555 4981 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 15:17:41 crc kubenswrapper[4981]: E0128 15:17:41.776618 4981 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/50c5243a-5761-47df-b450-770a6522770c-memberlist podName:50c5243a-5761-47df-b450-770a6522770c nodeName:}" failed. No retries permitted until 2026-01-28 15:17:42.276600983 +0000 UTC m=+873.728759224 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/50c5243a-5761-47df-b450-770a6522770c-memberlist") pod "speaker-86r4q" (UID: "50c5243a-5761-47df-b450-770a6522770c") : secret "metallb-memberlist" not found Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.777470 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/50c5243a-5761-47df-b450-770a6522770c-metallb-excludel2\") pod \"speaker-86r4q\" (UID: \"50c5243a-5761-47df-b450-770a6522770c\") " pod="metallb-system/speaker-86r4q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.777774 4981 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.779357 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksslx" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.779853 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/50c5243a-5761-47df-b450-770a6522770c-metrics-certs\") pod \"speaker-86r4q\" (UID: \"50c5243a-5761-47df-b450-770a6522770c\") " pod="metallb-system/speaker-86r4q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.779926 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/de4f8121-43f6-4041-873d-2c13aca10ed9-metrics-certs\") pod \"controller-6968d8fdc4-rxw2q\" (UID: \"de4f8121-43f6-4041-873d-2c13aca10ed9\") " pod="metallb-system/controller-6968d8fdc4-rxw2q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.789656 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/de4f8121-43f6-4041-873d-2c13aca10ed9-cert\") pod \"controller-6968d8fdc4-rxw2q\" (UID: \"de4f8121-43f6-4041-873d-2c13aca10ed9\") " pod="metallb-system/controller-6968d8fdc4-rxw2q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.791950 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvxn5\" (UniqueName: \"kubernetes.io/projected/50c5243a-5761-47df-b450-770a6522770c-kube-api-access-dvxn5\") pod \"speaker-86r4q\" (UID: \"50c5243a-5761-47df-b450-770a6522770c\") " pod="metallb-system/speaker-86r4q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.801952 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ddkr\" (UniqueName: \"kubernetes.io/projected/de4f8121-43f6-4041-873d-2c13aca10ed9-kube-api-access-4ddkr\") pod \"controller-6968d8fdc4-rxw2q\" (UID: \"de4f8121-43f6-4041-873d-2c13aca10ed9\") " pod="metallb-system/controller-6968d8fdc4-rxw2q" Jan 28 15:17:41 crc kubenswrapper[4981]: I0128 15:17:41.878593 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-rxw2q" Jan 28 15:17:42 crc kubenswrapper[4981]: I0128 15:17:42.014408 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksslx"] Jan 28 15:17:42 crc kubenswrapper[4981]: I0128 15:17:42.187491 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2f8b9f36-0910-4437-b804-d62c58740667-metrics-certs\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:42 crc kubenswrapper[4981]: I0128 15:17:42.191548 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2f8b9f36-0910-4437-b804-d62c58740667-metrics-certs\") pod \"frr-k8s-hvd88\" (UID: \"2f8b9f36-0910-4437-b804-d62c58740667\") " pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:42 crc kubenswrapper[4981]: I0128 15:17:42.288251 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/50c5243a-5761-47df-b450-770a6522770c-memberlist\") pod \"speaker-86r4q\" (UID: \"50c5243a-5761-47df-b450-770a6522770c\") " pod="metallb-system/speaker-86r4q" Jan 28 15:17:42 crc kubenswrapper[4981]: E0128 15:17:42.288453 4981 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 15:17:42 crc kubenswrapper[4981]: E0128 15:17:42.288543 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50c5243a-5761-47df-b450-770a6522770c-memberlist podName:50c5243a-5761-47df-b450-770a6522770c nodeName:}" failed. No retries permitted until 2026-01-28 15:17:43.288507903 +0000 UTC m=+874.740666144 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/50c5243a-5761-47df-b450-770a6522770c-memberlist") pod "speaker-86r4q" (UID: "50c5243a-5761-47df-b450-770a6522770c") : secret "metallb-memberlist" not found Jan 28 15:17:42 crc kubenswrapper[4981]: I0128 15:17:42.294084 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksslx" event={"ID":"c0df8723-60c7-4731-8420-e3279d5f1fce","Type":"ContainerStarted","Data":"e582518e2a0d796a4fdfff27aeef072235a4a20a39461b5fb602e2ae39b79df7"} Jan 28 15:17:42 crc kubenswrapper[4981]: I0128 15:17:42.351607 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-rxw2q"] Jan 28 15:17:42 crc kubenswrapper[4981]: W0128 15:17:42.366204 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde4f8121_43f6_4041_873d_2c13aca10ed9.slice/crio-94f3a22f48ff926999ff067fe14f5ad6aea6c78ff1750db3738732f8cdb739c8 WatchSource:0}: Error finding container 94f3a22f48ff926999ff067fe14f5ad6aea6c78ff1750db3738732f8cdb739c8: Status 404 returned error can't find the container with id 94f3a22f48ff926999ff067fe14f5ad6aea6c78ff1750db3738732f8cdb739c8 Jan 28 15:17:42 crc kubenswrapper[4981]: I0128 15:17:42.393897 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:43 crc kubenswrapper[4981]: I0128 15:17:43.302077 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-rxw2q" event={"ID":"de4f8121-43f6-4041-873d-2c13aca10ed9","Type":"ContainerStarted","Data":"915893ebd1c2b08ce4806e81a86ade602e8aba07dfd774a1d34330107a59796b"} Jan 28 15:17:43 crc kubenswrapper[4981]: I0128 15:17:43.302461 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-rxw2q" event={"ID":"de4f8121-43f6-4041-873d-2c13aca10ed9","Type":"ContainerStarted","Data":"bdd977508cc4ae549de2589c80f1eb24de180779e2148e676e70a8bb12c78e01"} Jan 28 15:17:43 crc kubenswrapper[4981]: I0128 15:17:43.302478 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-rxw2q" event={"ID":"de4f8121-43f6-4041-873d-2c13aca10ed9","Type":"ContainerStarted","Data":"94f3a22f48ff926999ff067fe14f5ad6aea6c78ff1750db3738732f8cdb739c8"} Jan 28 15:17:43 crc kubenswrapper[4981]: I0128 15:17:43.302498 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-rxw2q" Jan 28 15:17:43 crc kubenswrapper[4981]: I0128 15:17:43.304208 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/50c5243a-5761-47df-b450-770a6522770c-memberlist\") pod \"speaker-86r4q\" (UID: \"50c5243a-5761-47df-b450-770a6522770c\") " pod="metallb-system/speaker-86r4q" Jan 28 15:17:43 crc kubenswrapper[4981]: I0128 15:17:43.307442 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvd88" event={"ID":"2f8b9f36-0910-4437-b804-d62c58740667","Type":"ContainerStarted","Data":"f9f26364954c2e974d4ad6544d93daf6a0b1316a1227a0154c0014e3efe5458d"} Jan 28 15:17:43 crc kubenswrapper[4981]: I0128 15:17:43.308888 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/50c5243a-5761-47df-b450-770a6522770c-memberlist\") pod \"speaker-86r4q\" (UID: \"50c5243a-5761-47df-b450-770a6522770c\") " pod="metallb-system/speaker-86r4q" Jan 28 15:17:43 crc kubenswrapper[4981]: I0128 15:17:43.338526 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-rxw2q" podStartSLOduration=2.338509736 podStartE2EDuration="2.338509736s" podCreationTimestamp="2026-01-28 15:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:17:43.337961962 +0000 UTC m=+874.790120203" watchObservedRunningTime="2026-01-28 15:17:43.338509736 +0000 UTC m=+874.790667977" Jan 28 15:17:43 crc kubenswrapper[4981]: I0128 15:17:43.358419 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-86r4q" Jan 28 15:17:44 crc kubenswrapper[4981]: I0128 15:17:44.332984 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-86r4q" event={"ID":"50c5243a-5761-47df-b450-770a6522770c","Type":"ContainerStarted","Data":"9485ca74891c607e2f9187d77d7ab0f7e87f92f9a2428bb31705394488e06d70"} Jan 28 15:17:44 crc kubenswrapper[4981]: I0128 15:17:44.333529 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-86r4q" event={"ID":"50c5243a-5761-47df-b450-770a6522770c","Type":"ContainerStarted","Data":"c4208bd2a0799a5de4df697f4708fb89acc371f1a9a24acee8f91e2e1a246a62"} Jan 28 15:17:44 crc kubenswrapper[4981]: I0128 15:17:44.333547 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-86r4q" event={"ID":"50c5243a-5761-47df-b450-770a6522770c","Type":"ContainerStarted","Data":"959e20a81b72d5a59126d77d15e15f786b9ce16881183599abc39ecfd9a19ac9"} Jan 28 15:17:44 crc kubenswrapper[4981]: I0128 15:17:44.333817 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-86r4q" Jan 28 15:17:44 crc kubenswrapper[4981]: I0128 15:17:44.364333 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-86r4q" podStartSLOduration=3.364307859 podStartE2EDuration="3.364307859s" podCreationTimestamp="2026-01-28 15:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:17:44.356805201 +0000 UTC m=+875.808963452" watchObservedRunningTime="2026-01-28 15:17:44.364307859 +0000 UTC m=+875.816466110" Jan 28 15:17:51 crc kubenswrapper[4981]: I0128 15:17:51.380136 4981 generic.go:334] "Generic (PLEG): container finished" podID="2f8b9f36-0910-4437-b804-d62c58740667" containerID="7df01ecfdf228c44feb831267d2e256c1651f34f857ccd27af5f81d0e5f783b7" exitCode=0 Jan 28 15:17:51 crc kubenswrapper[4981]: I0128 15:17:51.380242 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvd88" event={"ID":"2f8b9f36-0910-4437-b804-d62c58740667","Type":"ContainerDied","Data":"7df01ecfdf228c44feb831267d2e256c1651f34f857ccd27af5f81d0e5f783b7"} Jan 28 15:17:51 crc kubenswrapper[4981]: I0128 15:17:51.391727 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksslx" event={"ID":"c0df8723-60c7-4731-8420-e3279d5f1fce","Type":"ContainerStarted","Data":"a5da816819286667f0bbccb11a5283e0096eb7bcd2c1204f7dec81609ca2d28c"} Jan 28 15:17:51 crc kubenswrapper[4981]: I0128 15:17:51.392269 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksslx" Jan 28 15:17:51 crc kubenswrapper[4981]: I0128 15:17:51.439160 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksslx" podStartSLOduration=2.070315112 podStartE2EDuration="10.439081976s" podCreationTimestamp="2026-01-28 15:17:41 +0000 UTC" firstStartedPulling="2026-01-28 15:17:42.057397908 +0000 UTC m=+873.509556149" lastFinishedPulling="2026-01-28 15:17:50.426164772 +0000 UTC m=+881.878323013" observedRunningTime="2026-01-28 15:17:51.43697541 +0000 UTC m=+882.889133661" watchObservedRunningTime="2026-01-28 15:17:51.439081976 +0000 UTC m=+882.891240227" Jan 28 15:17:52 crc kubenswrapper[4981]: I0128 15:17:52.400990 4981 generic.go:334] "Generic (PLEG): container finished" 
podID="2f8b9f36-0910-4437-b804-d62c58740667" containerID="c637af36c2c7b3da0117ca65fbc83e545964535a2390e2ae1fb4c2a6ada21f4f" exitCode=0 Jan 28 15:17:52 crc kubenswrapper[4981]: I0128 15:17:52.401081 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvd88" event={"ID":"2f8b9f36-0910-4437-b804-d62c58740667","Type":"ContainerDied","Data":"c637af36c2c7b3da0117ca65fbc83e545964535a2390e2ae1fb4c2a6ada21f4f"} Jan 28 15:17:53 crc kubenswrapper[4981]: I0128 15:17:53.364997 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-86r4q" Jan 28 15:17:53 crc kubenswrapper[4981]: I0128 15:17:53.414667 4981 generic.go:334] "Generic (PLEG): container finished" podID="2f8b9f36-0910-4437-b804-d62c58740667" containerID="d55e279f76a642ed5762a7a4d2d29f7ceb5bc7e48d0e95d23e3cf9e074374703" exitCode=0 Jan 28 15:17:53 crc kubenswrapper[4981]: I0128 15:17:53.414753 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvd88" event={"ID":"2f8b9f36-0910-4437-b804-d62c58740667","Type":"ContainerDied","Data":"d55e279f76a642ed5762a7a4d2d29f7ceb5bc7e48d0e95d23e3cf9e074374703"} Jan 28 15:17:54 crc kubenswrapper[4981]: I0128 15:17:54.425110 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvd88" event={"ID":"2f8b9f36-0910-4437-b804-d62c58740667","Type":"ContainerStarted","Data":"9ab4bad96cc21ea58d5c2b210ff2b02522a68acb7685a2997177c5ffaab71496"} Jan 28 15:17:54 crc kubenswrapper[4981]: I0128 15:17:54.425443 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvd88" event={"ID":"2f8b9f36-0910-4437-b804-d62c58740667","Type":"ContainerStarted","Data":"56cb8cc8fb5721eef1900319676b456e8db510aa2f8c39f44a0f0660767e274b"} Jan 28 15:17:54 crc kubenswrapper[4981]: I0128 15:17:54.425458 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvd88" event={"ID":"2f8b9f36-0910-4437-b804-d62c58740667","Type":"ContainerStarted","Data":"9fe110292c433234e407dfe79b3bb4422ce1cad12a13aa000ff370b33f2b4dcf"} Jan 28 15:17:54 crc kubenswrapper[4981]: I0128 15:17:54.425472 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvd88" event={"ID":"2f8b9f36-0910-4437-b804-d62c58740667","Type":"ContainerStarted","Data":"88cce2f3d29790e84824341867e38a502c4250cd8bd268a7bd18b9d83a8cd877"} Jan 28 15:17:54 crc kubenswrapper[4981]: I0128 15:17:54.425483 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvd88" event={"ID":"2f8b9f36-0910-4437-b804-d62c58740667","Type":"ContainerStarted","Data":"7d391187b21be3e5c7ecc29f70fee34ec11ee8270a73c222f7e57d61b3c7b242"} Jan 28 15:17:55 crc kubenswrapper[4981]: I0128 15:17:55.438453 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hvd88" event={"ID":"2f8b9f36-0910-4437-b804-d62c58740667","Type":"ContainerStarted","Data":"1809bf05ff75795eb9f93366e65f1391882d8f856024b08508789181b40392ec"} Jan 28 15:17:55 crc kubenswrapper[4981]: I0128 15:17:55.439177 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:55 crc kubenswrapper[4981]: I0128 15:17:55.469156 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-hvd88" podStartSLOduration=6.5782741829999996 podStartE2EDuration="14.469138673s" podCreationTimestamp="2026-01-28 15:17:41 +0000 UTC" firstStartedPulling="2026-01-28 15:17:42.512904592 +0000 UTC m=+873.965062843" 
lastFinishedPulling="2026-01-28 15:17:50.403769092 +0000 UTC m=+881.855927333" observedRunningTime="2026-01-28 15:17:55.466259607 +0000 UTC m=+886.918417858" watchObservedRunningTime="2026-01-28 15:17:55.469138673 +0000 UTC m=+886.921296924" Jan 28 15:17:57 crc kubenswrapper[4981]: I0128 15:17:57.394896 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:57 crc kubenswrapper[4981]: I0128 15:17:57.490070 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-hvd88" Jan 28 15:17:59 crc kubenswrapper[4981]: I0128 15:17:59.401689 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-69nvl"] Jan 28 15:17:59 crc kubenswrapper[4981]: I0128 15:17:59.403277 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-69nvl" Jan 28 15:17:59 crc kubenswrapper[4981]: I0128 15:17:59.407098 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 28 15:17:59 crc kubenswrapper[4981]: I0128 15:17:59.407170 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 28 15:17:59 crc kubenswrapper[4981]: I0128 15:17:59.411285 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-xf785" Jan 28 15:17:59 crc kubenswrapper[4981]: I0128 15:17:59.427972 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-69nvl"] Jan 28 15:17:59 crc kubenswrapper[4981]: I0128 15:17:59.538753 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpz9v\" (UniqueName: \"kubernetes.io/projected/1787f27a-f5e3-406a-aad4-0f3bb1a944c4-kube-api-access-wpz9v\") pod \"openstack-operator-index-69nvl\" (UID: \"1787f27a-f5e3-406a-aad4-0f3bb1a944c4\") " pod="openstack-operators/openstack-operator-index-69nvl" Jan 28 15:17:59 crc kubenswrapper[4981]: I0128 15:17:59.640538 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpz9v\" (UniqueName: \"kubernetes.io/projected/1787f27a-f5e3-406a-aad4-0f3bb1a944c4-kube-api-access-wpz9v\") pod \"openstack-operator-index-69nvl\" (UID: \"1787f27a-f5e3-406a-aad4-0f3bb1a944c4\") " pod="openstack-operators/openstack-operator-index-69nvl" Jan 28 15:17:59 crc kubenswrapper[4981]: I0128 15:17:59.669520 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpz9v\" (UniqueName: \"kubernetes.io/projected/1787f27a-f5e3-406a-aad4-0f3bb1a944c4-kube-api-access-wpz9v\") pod \"openstack-operator-index-69nvl\" (UID: \"1787f27a-f5e3-406a-aad4-0f3bb1a944c4\") " pod="openstack-operators/openstack-operator-index-69nvl" Jan 28 15:17:59 crc kubenswrapper[4981]: I0128 15:17:59.736818 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-69nvl" Jan 28 15:18:00 crc kubenswrapper[4981]: W0128 15:18:00.251267 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1787f27a_f5e3_406a_aad4_0f3bb1a944c4.slice/crio-ec43c422eb9b22aff9d30b9ad1ea76468e64bbf95508c291f29e3b58afb15035 WatchSource:0}: Error finding container ec43c422eb9b22aff9d30b9ad1ea76468e64bbf95508c291f29e3b58afb15035: Status 404 returned error can't find the container with id ec43c422eb9b22aff9d30b9ad1ea76468e64bbf95508c291f29e3b58afb15035 Jan 28 15:18:00 crc kubenswrapper[4981]: I0128 15:18:00.260617 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-69nvl"] Jan 28 15:18:00 crc kubenswrapper[4981]: I0128 15:18:00.471273 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-69nvl" event={"ID":"1787f27a-f5e3-406a-aad4-0f3bb1a944c4","Type":"ContainerStarted","Data":"ec43c422eb9b22aff9d30b9ad1ea76468e64bbf95508c291f29e3b58afb15035"} Jan 28 15:18:01 crc kubenswrapper[4981]: I0128 15:18:01.784039 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-ksslx" Jan 28 15:18:01 crc kubenswrapper[4981]: I0128 15:18:01.888444 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-rxw2q" Jan 28 15:18:04 crc kubenswrapper[4981]: I0128 15:18:04.496965 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-69nvl" event={"ID":"1787f27a-f5e3-406a-aad4-0f3bb1a944c4","Type":"ContainerStarted","Data":"56ee356e8a3fee674c84487b1a64eda5dfee8813856f1bb0445bd20b9da1b3f4"} Jan 28 15:18:04 crc kubenswrapper[4981]: I0128 15:18:04.525238 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-69nvl" podStartSLOduration=2.14970301 podStartE2EDuration="5.525173994s" podCreationTimestamp="2026-01-28 15:17:59 +0000 UTC" firstStartedPulling="2026-01-28 15:18:00.253530505 +0000 UTC m=+891.705688756" lastFinishedPulling="2026-01-28 15:18:03.629001499 +0000 UTC m=+895.081159740" observedRunningTime="2026-01-28 15:18:04.512537061 +0000 UTC m=+895.964695322" watchObservedRunningTime="2026-01-28 15:18:04.525173994 +0000 UTC m=+895.977332275" Jan 28 15:18:04 crc kubenswrapper[4981]: I0128 15:18:04.590352 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-69nvl"] Jan 28 15:18:05 crc kubenswrapper[4981]: I0128 15:18:05.194463 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cpdk7"] Jan 28 15:18:05 crc kubenswrapper[4981]: I0128 15:18:05.195145 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cpdk7" Jan 28 15:18:05 crc kubenswrapper[4981]: I0128 15:18:05.205098 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cpdk7"] Jan 28 15:18:05 crc kubenswrapper[4981]: I0128 15:18:05.325441 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkc44\" (UniqueName: \"kubernetes.io/projected/1d4745c7-b014-492d-936f-d4c430359df3-kube-api-access-zkc44\") pod \"openstack-operator-index-cpdk7\" (UID: \"1d4745c7-b014-492d-936f-d4c430359df3\") " pod="openstack-operators/openstack-operator-index-cpdk7" Jan 28 15:18:05 crc kubenswrapper[4981]: I0128 15:18:05.427102 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkc44\" (UniqueName: \"kubernetes.io/projected/1d4745c7-b014-492d-936f-d4c430359df3-kube-api-access-zkc44\") pod \"openstack-operator-index-cpdk7\" (UID: \"1d4745c7-b014-492d-936f-d4c430359df3\") " pod="openstack-operators/openstack-operator-index-cpdk7" Jan 28 15:18:05 crc kubenswrapper[4981]: I0128 15:18:05.447365 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkc44\" (UniqueName: \"kubernetes.io/projected/1d4745c7-b014-492d-936f-d4c430359df3-kube-api-access-zkc44\") pod \"openstack-operator-index-cpdk7\" (UID: \"1d4745c7-b014-492d-936f-d4c430359df3\") " pod="openstack-operators/openstack-operator-index-cpdk7" Jan 28 15:18:05 crc kubenswrapper[4981]: I0128 15:18:05.514835 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-cpdk7" Jan 28 15:18:05 crc kubenswrapper[4981]: I0128 15:18:05.739967 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cpdk7"] Jan 28 15:18:06 crc kubenswrapper[4981]: I0128 15:18:06.516779 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-69nvl" podUID="1787f27a-f5e3-406a-aad4-0f3bb1a944c4" containerName="registry-server" containerID="cri-o://56ee356e8a3fee674c84487b1a64eda5dfee8813856f1bb0445bd20b9da1b3f4" gracePeriod=2 Jan 28 15:18:06 crc kubenswrapper[4981]: I0128 15:18:06.517377 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cpdk7" event={"ID":"1d4745c7-b014-492d-936f-d4c430359df3","Type":"ContainerStarted","Data":"ace931a1391fafafb7b787e63378a64bb8de5918dcf75ce731f58a7d8552e27d"} Jan 28 15:18:06 crc kubenswrapper[4981]: I0128 15:18:06.517422 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cpdk7" event={"ID":"1d4745c7-b014-492d-936f-d4c430359df3","Type":"ContainerStarted","Data":"1e49b639370f4f71256cde50af00125344f9f77a20cb79ab754e266e27cc0805"} Jan 28 15:18:06 crc kubenswrapper[4981]: I0128 15:18:06.537411 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cpdk7" podStartSLOduration=1.488787341 podStartE2EDuration="1.537390613s" podCreationTimestamp="2026-01-28 15:18:05 +0000 UTC" firstStartedPulling="2026-01-28 15:18:05.759789095 +0000 UTC m=+897.211947336" lastFinishedPulling="2026-01-28 15:18:05.808392367 +0000 UTC m=+897.260550608" observedRunningTime="2026-01-28 15:18:06.533379827 +0000 UTC m=+897.985538078" watchObservedRunningTime="2026-01-28 15:18:06.537390613 +0000 UTC m=+897.989548854" 
Jan 28 15:18:06 crc kubenswrapper[4981]: I0128 15:18:06.938347 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-69nvl"
Jan 28 15:18:07 crc kubenswrapper[4981]: I0128 15:18:07.050344 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpz9v\" (UniqueName: \"kubernetes.io/projected/1787f27a-f5e3-406a-aad4-0f3bb1a944c4-kube-api-access-wpz9v\") pod \"1787f27a-f5e3-406a-aad4-0f3bb1a944c4\" (UID: \"1787f27a-f5e3-406a-aad4-0f3bb1a944c4\") "
Jan 28 15:18:07 crc kubenswrapper[4981]: I0128 15:18:07.057278 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1787f27a-f5e3-406a-aad4-0f3bb1a944c4-kube-api-access-wpz9v" (OuterVolumeSpecName: "kube-api-access-wpz9v") pod "1787f27a-f5e3-406a-aad4-0f3bb1a944c4" (UID: "1787f27a-f5e3-406a-aad4-0f3bb1a944c4"). InnerVolumeSpecName "kube-api-access-wpz9v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:18:07 crc kubenswrapper[4981]: I0128 15:18:07.152107 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpz9v\" (UniqueName: \"kubernetes.io/projected/1787f27a-f5e3-406a-aad4-0f3bb1a944c4-kube-api-access-wpz9v\") on node \"crc\" DevicePath \"\""
Jan 28 15:18:07 crc kubenswrapper[4981]: I0128 15:18:07.527044 4981 generic.go:334] "Generic (PLEG): container finished" podID="1787f27a-f5e3-406a-aad4-0f3bb1a944c4" containerID="56ee356e8a3fee674c84487b1a64eda5dfee8813856f1bb0445bd20b9da1b3f4" exitCode=0
Jan 28 15:18:07 crc kubenswrapper[4981]: I0128 15:18:07.527131 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-69nvl" event={"ID":"1787f27a-f5e3-406a-aad4-0f3bb1a944c4","Type":"ContainerDied","Data":"56ee356e8a3fee674c84487b1a64eda5dfee8813856f1bb0445bd20b9da1b3f4"}
Jan 28 15:18:07 crc kubenswrapper[4981]: I0128 15:18:07.527601 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-69nvl" event={"ID":"1787f27a-f5e3-406a-aad4-0f3bb1a944c4","Type":"ContainerDied","Data":"ec43c422eb9b22aff9d30b9ad1ea76468e64bbf95508c291f29e3b58afb15035"}
Jan 28 15:18:07 crc kubenswrapper[4981]: I0128 15:18:07.527211 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-69nvl"
Jan 28 15:18:07 crc kubenswrapper[4981]: I0128 15:18:07.527648 4981 scope.go:117] "RemoveContainer" containerID="56ee356e8a3fee674c84487b1a64eda5dfee8813856f1bb0445bd20b9da1b3f4"
Jan 28 15:18:07 crc kubenswrapper[4981]: I0128 15:18:07.551957 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-69nvl"]
Jan 28 15:18:07 crc kubenswrapper[4981]: I0128 15:18:07.556628 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-69nvl"]
Jan 28 15:18:07 crc kubenswrapper[4981]: I0128 15:18:07.559302 4981 scope.go:117] "RemoveContainer" containerID="56ee356e8a3fee674c84487b1a64eda5dfee8813856f1bb0445bd20b9da1b3f4"
Jan 28 15:18:07 crc kubenswrapper[4981]: E0128 15:18:07.559784 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56ee356e8a3fee674c84487b1a64eda5dfee8813856f1bb0445bd20b9da1b3f4\": container with ID starting with 56ee356e8a3fee674c84487b1a64eda5dfee8813856f1bb0445bd20b9da1b3f4 not found: ID does not exist" containerID="56ee356e8a3fee674c84487b1a64eda5dfee8813856f1bb0445bd20b9da1b3f4"
Jan 28 15:18:07 crc kubenswrapper[4981]: I0128 15:18:07.559823 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56ee356e8a3fee674c84487b1a64eda5dfee8813856f1bb0445bd20b9da1b3f4"} err="failed to get container status \"56ee356e8a3fee674c84487b1a64eda5dfee8813856f1bb0445bd20b9da1b3f4\": rpc error: code = NotFound desc = could not find container \"56ee356e8a3fee674c84487b1a64eda5dfee8813856f1bb0445bd20b9da1b3f4\": container with ID starting with 56ee356e8a3fee674c84487b1a64eda5dfee8813856f1bb0445bd20b9da1b3f4 not found: ID does not exist"
Jan 28 15:18:09 crc kubenswrapper[4981]: I0128 15:18:09.329802 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1787f27a-f5e3-406a-aad4-0f3bb1a944c4" path="/var/lib/kubelet/pods/1787f27a-f5e3-406a-aad4-0f3bb1a944c4/volumes"
Jan 28 15:18:12 crc kubenswrapper[4981]: I0128 15:18:12.399175 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-hvd88"
Jan 28 15:18:15 crc kubenswrapper[4981]: I0128 15:18:15.515571 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cpdk7"
Jan 28 15:18:15 crc kubenswrapper[4981]: I0128 15:18:15.516599 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cpdk7"
Jan 28 15:18:15 crc kubenswrapper[4981]: I0128 15:18:15.556789 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cpdk7"
Jan 28 15:18:15 crc kubenswrapper[4981]: I0128 15:18:15.617507 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-cpdk7"
Jan 28 15:18:18 crc kubenswrapper[4981]: I0128 15:18:18.041158 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd"]
Jan 28 15:18:18 crc kubenswrapper[4981]: E0128 15:18:18.041572 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1787f27a-f5e3-406a-aad4-0f3bb1a944c4" containerName="registry-server"
Jan 28 15:18:18 crc kubenswrapper[4981]: I0128 15:18:18.041592 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="1787f27a-f5e3-406a-aad4-0f3bb1a944c4" containerName="registry-server"
Jan 28 15:18:18 crc kubenswrapper[4981]: I0128 15:18:18.041784 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="1787f27a-f5e3-406a-aad4-0f3bb1a944c4" containerName="registry-server"
Jan 28 15:18:18 crc kubenswrapper[4981]: I0128 15:18:18.043795 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd"
Jan 28 15:18:18 crc kubenswrapper[4981]: I0128 15:18:18.048432 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-vdghj"
Jan 28 15:18:18 crc kubenswrapper[4981]: I0128 15:18:18.060926 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd"]
Jan 28 15:18:18 crc kubenswrapper[4981]: I0128 15:18:18.216198 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8txz5\" (UniqueName: \"kubernetes.io/projected/1c96985c-93d1-4967-83e7-0794b3159ca9-kube-api-access-8txz5\") pod \"b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd\" (UID: \"1c96985c-93d1-4967-83e7-0794b3159ca9\") " pod="openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd"
Jan 28 15:18:18 crc kubenswrapper[4981]: I0128 15:18:18.216607 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c96985c-93d1-4967-83e7-0794b3159ca9-util\") pod \"b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd\" (UID: \"1c96985c-93d1-4967-83e7-0794b3159ca9\") " pod="openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd"
Jan 28 15:18:18 crc kubenswrapper[4981]: I0128 15:18:18.216650 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c96985c-93d1-4967-83e7-0794b3159ca9-bundle\") pod \"b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd\" (UID: \"1c96985c-93d1-4967-83e7-0794b3159ca9\") " pod="openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd"
Jan 28 15:18:18 crc kubenswrapper[4981]: I0128 15:18:18.318612 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8txz5\" (UniqueName: \"kubernetes.io/projected/1c96985c-93d1-4967-83e7-0794b3159ca9-kube-api-access-8txz5\") pod \"b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd\" (UID: \"1c96985c-93d1-4967-83e7-0794b3159ca9\") " pod="openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd"
Jan 28 15:18:18 crc kubenswrapper[4981]: I0128 15:18:18.318688 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c96985c-93d1-4967-83e7-0794b3159ca9-util\") pod \"b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd\" (UID: \"1c96985c-93d1-4967-83e7-0794b3159ca9\") " pod="openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd"
Jan 28 15:18:18 crc kubenswrapper[4981]: I0128 15:18:18.318725 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c96985c-93d1-4967-83e7-0794b3159ca9-bundle\") pod \"b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd\" (UID: \"1c96985c-93d1-4967-83e7-0794b3159ca9\") " pod="openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd"
Jan 28 15:18:18 crc kubenswrapper[4981]: I0128 15:18:18.319522 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c96985c-93d1-4967-83e7-0794b3159ca9-bundle\") pod \"b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd\" (UID: \"1c96985c-93d1-4967-83e7-0794b3159ca9\") " pod="openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd"
Jan 28 15:18:18 crc kubenswrapper[4981]: I0128 15:18:18.319751 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c96985c-93d1-4967-83e7-0794b3159ca9-util\") pod \"b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd\" (UID: \"1c96985c-93d1-4967-83e7-0794b3159ca9\") " pod="openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd"
Jan 28 15:18:18 crc kubenswrapper[4981]: I0128 15:18:18.348182 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8txz5\" (UniqueName: \"kubernetes.io/projected/1c96985c-93d1-4967-83e7-0794b3159ca9-kube-api-access-8txz5\") pod \"b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd\" (UID: \"1c96985c-93d1-4967-83e7-0794b3159ca9\") " pod="openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd"
Jan 28 15:18:18 crc kubenswrapper[4981]: I0128 15:18:18.370830 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd"
Jan 28 15:18:18 crc kubenswrapper[4981]: I0128 15:18:18.644382 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd"]
Jan 28 15:18:19 crc kubenswrapper[4981]: I0128 15:18:19.627311 4981 generic.go:334] "Generic (PLEG): container finished" podID="1c96985c-93d1-4967-83e7-0794b3159ca9" containerID="1c0af6a435f25d06364e38c4cde063c2e06b96a996fe389bab341ce75a4537b4" exitCode=0
Jan 28 15:18:19 crc kubenswrapper[4981]: I0128 15:18:19.627367 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd" event={"ID":"1c96985c-93d1-4967-83e7-0794b3159ca9","Type":"ContainerDied","Data":"1c0af6a435f25d06364e38c4cde063c2e06b96a996fe389bab341ce75a4537b4"}
Jan 28 15:18:19 crc kubenswrapper[4981]: I0128 15:18:19.627407 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd" event={"ID":"1c96985c-93d1-4967-83e7-0794b3159ca9","Type":"ContainerStarted","Data":"b5883dea99ec9ae860d7b80c8bd31205f300839b4dd1f40e6b4b97bc8f321a42"}
Jan 28 15:18:20 crc kubenswrapper[4981]: I0128 15:18:20.636958 4981 generic.go:334] "Generic (PLEG): container finished" podID="1c96985c-93d1-4967-83e7-0794b3159ca9" containerID="11a23ac1a37230b5c115c4558b3b660d3d026fc93ba4ddd9d5a4cf76a5a21652" exitCode=0
Jan 28 15:18:20 crc kubenswrapper[4981]: I0128 15:18:20.637521 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd" event={"ID":"1c96985c-93d1-4967-83e7-0794b3159ca9","Type":"ContainerDied","Data":"11a23ac1a37230b5c115c4558b3b660d3d026fc93ba4ddd9d5a4cf76a5a21652"}
Jan 28 15:18:21 crc kubenswrapper[4981]: I0128 15:18:21.649257 4981 generic.go:334] "Generic (PLEG): container finished" podID="1c96985c-93d1-4967-83e7-0794b3159ca9" containerID="e14097ce0ad2ed4bc79bd111c3d9ec5b7ac079bbacf11248a46b92d2a4b4b00f" exitCode=0
Jan 28 15:18:21 crc kubenswrapper[4981]: I0128 15:18:21.649328 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd" event={"ID":"1c96985c-93d1-4967-83e7-0794b3159ca9","Type":"ContainerDied","Data":"e14097ce0ad2ed4bc79bd111c3d9ec5b7ac079bbacf11248a46b92d2a4b4b00f"}
Jan 28 15:18:22 crc kubenswrapper[4981]: I0128 15:18:22.974441 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd"
Jan 28 15:18:23 crc kubenswrapper[4981]: I0128 15:18:23.094498 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c96985c-93d1-4967-83e7-0794b3159ca9-bundle\") pod \"1c96985c-93d1-4967-83e7-0794b3159ca9\" (UID: \"1c96985c-93d1-4967-83e7-0794b3159ca9\") "
Jan 28 15:18:23 crc kubenswrapper[4981]: I0128 15:18:23.094574 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8txz5\" (UniqueName: \"kubernetes.io/projected/1c96985c-93d1-4967-83e7-0794b3159ca9-kube-api-access-8txz5\") pod \"1c96985c-93d1-4967-83e7-0794b3159ca9\" (UID: \"1c96985c-93d1-4967-83e7-0794b3159ca9\") "
Jan 28 15:18:23 crc kubenswrapper[4981]: I0128 15:18:23.094643 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c96985c-93d1-4967-83e7-0794b3159ca9-util\") pod \"1c96985c-93d1-4967-83e7-0794b3159ca9\" (UID: \"1c96985c-93d1-4967-83e7-0794b3159ca9\") "
Jan 28 15:18:23 crc kubenswrapper[4981]: I0128 15:18:23.095663 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c96985c-93d1-4967-83e7-0794b3159ca9-bundle" (OuterVolumeSpecName: "bundle") pod "1c96985c-93d1-4967-83e7-0794b3159ca9" (UID: "1c96985c-93d1-4967-83e7-0794b3159ca9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:18:23 crc kubenswrapper[4981]: I0128 15:18:23.100723 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c96985c-93d1-4967-83e7-0794b3159ca9-kube-api-access-8txz5" (OuterVolumeSpecName: "kube-api-access-8txz5") pod "1c96985c-93d1-4967-83e7-0794b3159ca9" (UID: "1c96985c-93d1-4967-83e7-0794b3159ca9"). InnerVolumeSpecName "kube-api-access-8txz5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:18:23 crc kubenswrapper[4981]: I0128 15:18:23.109514 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c96985c-93d1-4967-83e7-0794b3159ca9-util" (OuterVolumeSpecName: "util") pod "1c96985c-93d1-4967-83e7-0794b3159ca9" (UID: "1c96985c-93d1-4967-83e7-0794b3159ca9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:18:23 crc kubenswrapper[4981]: I0128 15:18:23.195681 4981 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c96985c-93d1-4967-83e7-0794b3159ca9-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:18:23 crc kubenswrapper[4981]: I0128 15:18:23.195719 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8txz5\" (UniqueName: \"kubernetes.io/projected/1c96985c-93d1-4967-83e7-0794b3159ca9-kube-api-access-8txz5\") on node \"crc\" DevicePath \"\""
Jan 28 15:18:23 crc kubenswrapper[4981]: I0128 15:18:23.195729 4981 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c96985c-93d1-4967-83e7-0794b3159ca9-util\") on node \"crc\" DevicePath \"\""
Jan 28 15:18:23 crc kubenswrapper[4981]: I0128 15:18:23.668240 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd" event={"ID":"1c96985c-93d1-4967-83e7-0794b3159ca9","Type":"ContainerDied","Data":"b5883dea99ec9ae860d7b80c8bd31205f300839b4dd1f40e6b4b97bc8f321a42"}
Jan 28 15:18:23 crc kubenswrapper[4981]: I0128 15:18:23.668966 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5883dea99ec9ae860d7b80c8bd31205f300839b4dd1f40e6b4b97bc8f321a42"
Jan 28 15:18:23 crc kubenswrapper[4981]: I0128 15:18:23.669069 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd"
Jan 28 15:18:27 crc kubenswrapper[4981]: I0128 15:18:27.152046 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vn8cw"]
Jan 28 15:18:27 crc kubenswrapper[4981]: E0128 15:18:27.152530 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c96985c-93d1-4967-83e7-0794b3159ca9" containerName="extract"
Jan 28 15:18:27 crc kubenswrapper[4981]: I0128 15:18:27.152542 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c96985c-93d1-4967-83e7-0794b3159ca9" containerName="extract"
Jan 28 15:18:27 crc kubenswrapper[4981]: E0128 15:18:27.152562 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c96985c-93d1-4967-83e7-0794b3159ca9" containerName="pull"
Jan 28 15:18:27 crc kubenswrapper[4981]: I0128 15:18:27.152568 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c96985c-93d1-4967-83e7-0794b3159ca9" containerName="pull"
Jan 28 15:18:27 crc kubenswrapper[4981]: E0128 15:18:27.152577 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c96985c-93d1-4967-83e7-0794b3159ca9" containerName="util"
Jan 28 15:18:27 crc kubenswrapper[4981]: I0128 15:18:27.152584 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c96985c-93d1-4967-83e7-0794b3159ca9" containerName="util"
Jan 28 15:18:27 crc kubenswrapper[4981]: I0128 15:18:27.152697 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c96985c-93d1-4967-83e7-0794b3159ca9" containerName="extract"
Jan 28 15:18:27 crc kubenswrapper[4981]: I0128 15:18:27.153514 4981 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vn8cw" Jan 28 15:18:27 crc kubenswrapper[4981]: I0128 15:18:27.200107 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vn8cw"] Jan 28 15:18:27 crc kubenswrapper[4981]: I0128 15:18:27.253449 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f6ba834-a18f-4334-bd00-05b460bf018b-catalog-content\") pod \"redhat-marketplace-vn8cw\" (UID: \"5f6ba834-a18f-4334-bd00-05b460bf018b\") " pod="openshift-marketplace/redhat-marketplace-vn8cw" Jan 28 15:18:27 crc kubenswrapper[4981]: I0128 15:18:27.253506 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tnsc\" (UniqueName: \"kubernetes.io/projected/5f6ba834-a18f-4334-bd00-05b460bf018b-kube-api-access-2tnsc\") pod \"redhat-marketplace-vn8cw\" (UID: \"5f6ba834-a18f-4334-bd00-05b460bf018b\") " pod="openshift-marketplace/redhat-marketplace-vn8cw" Jan 28 15:18:27 crc kubenswrapper[4981]: I0128 15:18:27.253533 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f6ba834-a18f-4334-bd00-05b460bf018b-utilities\") pod \"redhat-marketplace-vn8cw\" (UID: \"5f6ba834-a18f-4334-bd00-05b460bf018b\") " pod="openshift-marketplace/redhat-marketplace-vn8cw" Jan 28 15:18:27 crc kubenswrapper[4981]: I0128 15:18:27.355527 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f6ba834-a18f-4334-bd00-05b460bf018b-utilities\") pod \"redhat-marketplace-vn8cw\" (UID: \"5f6ba834-a18f-4334-bd00-05b460bf018b\") " pod="openshift-marketplace/redhat-marketplace-vn8cw" Jan 28 15:18:27 crc kubenswrapper[4981]: I0128 15:18:27.356005 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f6ba834-a18f-4334-bd00-05b460bf018b-catalog-content\") pod \"redhat-marketplace-vn8cw\" (UID: \"5f6ba834-a18f-4334-bd00-05b460bf018b\") " pod="openshift-marketplace/redhat-marketplace-vn8cw" Jan 28 15:18:27 crc kubenswrapper[4981]: I0128 15:18:27.356240 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tnsc\" (UniqueName: \"kubernetes.io/projected/5f6ba834-a18f-4334-bd00-05b460bf018b-kube-api-access-2tnsc\") pod \"redhat-marketplace-vn8cw\" (UID: \"5f6ba834-a18f-4334-bd00-05b460bf018b\") " pod="openshift-marketplace/redhat-marketplace-vn8cw" Jan 28 15:18:27 crc kubenswrapper[4981]: I0128 15:18:27.356446 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f6ba834-a18f-4334-bd00-05b460bf018b-utilities\") pod \"redhat-marketplace-vn8cw\" (UID: \"5f6ba834-a18f-4334-bd00-05b460bf018b\") " pod="openshift-marketplace/redhat-marketplace-vn8cw" Jan 28 15:18:27 crc kubenswrapper[4981]: I0128 15:18:27.356499 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f6ba834-a18f-4334-bd00-05b460bf018b-catalog-content\") pod \"redhat-marketplace-vn8cw\" (UID: \"5f6ba834-a18f-4334-bd00-05b460bf018b\") " pod="openshift-marketplace/redhat-marketplace-vn8cw" Jan 28 15:18:27 crc kubenswrapper[4981]: I0128 15:18:27.381637 4981 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2tnsc\" (UniqueName: \"kubernetes.io/projected/5f6ba834-a18f-4334-bd00-05b460bf018b-kube-api-access-2tnsc\") pod \"redhat-marketplace-vn8cw\" (UID: \"5f6ba834-a18f-4334-bd00-05b460bf018b\") " pod="openshift-marketplace/redhat-marketplace-vn8cw" Jan 28 15:18:27 crc kubenswrapper[4981]: I0128 15:18:27.484725 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vn8cw" Jan 28 15:18:27 crc kubenswrapper[4981]: I0128 15:18:27.724773 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vn8cw"] Jan 28 15:18:28 crc kubenswrapper[4981]: I0128 15:18:28.701714 4981 generic.go:334] "Generic (PLEG): container finished" podID="5f6ba834-a18f-4334-bd00-05b460bf018b" containerID="98f02145137a844216899a55a0a35c6cd2235c7ee10b2fa9bd6375a0ed360dbf" exitCode=0 Jan 28 15:18:28 crc kubenswrapper[4981]: I0128 15:18:28.701761 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vn8cw" event={"ID":"5f6ba834-a18f-4334-bd00-05b460bf018b","Type":"ContainerDied","Data":"98f02145137a844216899a55a0a35c6cd2235c7ee10b2fa9bd6375a0ed360dbf"} Jan 28 15:18:28 crc kubenswrapper[4981]: I0128 15:18:28.702026 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vn8cw" event={"ID":"5f6ba834-a18f-4334-bd00-05b460bf018b","Type":"ContainerStarted","Data":"abdcef0760f3f5ee1cd6915eeff4cacddc7a756fd493ba9971d9387408fa640f"} Jan 28 15:18:29 crc kubenswrapper[4981]: I0128 15:18:29.116088 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-86c9dcbc4-gnfc9"] Jan 28 15:18:29 crc kubenswrapper[4981]: I0128 15:18:29.116983 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-86c9dcbc4-gnfc9" Jan 28 15:18:29 crc kubenswrapper[4981]: I0128 15:18:29.119717 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-l9pdj" Jan 28 15:18:29 crc kubenswrapper[4981]: I0128 15:18:29.185659 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-86c9dcbc4-gnfc9"] Jan 28 15:18:29 crc kubenswrapper[4981]: I0128 15:18:29.280488 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xf7f\" (UniqueName: \"kubernetes.io/projected/16d249a4-617e-4f09-9fca-93b89b337167-kube-api-access-9xf7f\") pod \"openstack-operator-controller-init-86c9dcbc4-gnfc9\" (UID: \"16d249a4-617e-4f09-9fca-93b89b337167\") " pod="openstack-operators/openstack-operator-controller-init-86c9dcbc4-gnfc9" Jan 28 15:18:29 crc kubenswrapper[4981]: I0128 15:18:29.381713 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xf7f\" (UniqueName: \"kubernetes.io/projected/16d249a4-617e-4f09-9fca-93b89b337167-kube-api-access-9xf7f\") pod \"openstack-operator-controller-init-86c9dcbc4-gnfc9\" (UID: \"16d249a4-617e-4f09-9fca-93b89b337167\") " pod="openstack-operators/openstack-operator-controller-init-86c9dcbc4-gnfc9" Jan 28 15:18:29 crc kubenswrapper[4981]: I0128 15:18:29.402917 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xf7f\" (UniqueName: \"kubernetes.io/projected/16d249a4-617e-4f09-9fca-93b89b337167-kube-api-access-9xf7f\") pod \"openstack-operator-controller-init-86c9dcbc4-gnfc9\" (UID: \"16d249a4-617e-4f09-9fca-93b89b337167\") " pod="openstack-operators/openstack-operator-controller-init-86c9dcbc4-gnfc9" Jan 28 15:18:29 crc kubenswrapper[4981]: I0128 15:18:29.432726 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-86c9dcbc4-gnfc9" Jan 28 15:18:29 crc kubenswrapper[4981]: I0128 15:18:29.720390 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-86c9dcbc4-gnfc9"] Jan 28 15:18:29 crc kubenswrapper[4981]: W0128 15:18:29.727678 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16d249a4_617e_4f09_9fca_93b89b337167.slice/crio-cfa72431bfdaeac95dd345139cd0cbff51500a278e4c3e1558435d06a509a33b WatchSource:0}: Error finding container cfa72431bfdaeac95dd345139cd0cbff51500a278e4c3e1558435d06a509a33b: Status 404 returned error can't find the container with id cfa72431bfdaeac95dd345139cd0cbff51500a278e4c3e1558435d06a509a33b Jan 28 15:18:30 crc kubenswrapper[4981]: I0128 15:18:30.714817 4981 generic.go:334] "Generic (PLEG): container finished" podID="5f6ba834-a18f-4334-bd00-05b460bf018b" containerID="2796278c18534493275220006d8b699260c580eaa2e46749864f2a9293ddf429" exitCode=0 Jan 28 15:18:30 crc kubenswrapper[4981]: I0128 15:18:30.714984 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vn8cw" event={"ID":"5f6ba834-a18f-4334-bd00-05b460bf018b","Type":"ContainerDied","Data":"2796278c18534493275220006d8b699260c580eaa2e46749864f2a9293ddf429"} Jan 28 15:18:30 crc kubenswrapper[4981]: I0128 15:18:30.715800 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-86c9dcbc4-gnfc9" event={"ID":"16d249a4-617e-4f09-9fca-93b89b337167","Type":"ContainerStarted","Data":"cfa72431bfdaeac95dd345139cd0cbff51500a278e4c3e1558435d06a509a33b"} Jan 28 15:18:34 crc kubenswrapper[4981]: I0128 15:18:34.739907 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vn8cw" event={"ID":"5f6ba834-a18f-4334-bd00-05b460bf018b","Type":"ContainerStarted","Data":"66b816f1d621f73e499022bc3fe3958d1aeb7f333970ee8cf9ce5955c10161e2"} Jan 28 15:18:34 crc kubenswrapper[4981]: I0128 15:18:34.742387 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-86c9dcbc4-gnfc9" event={"ID":"16d249a4-617e-4f09-9fca-93b89b337167","Type":"ContainerStarted","Data":"cddcfb96c4b8c5878955f15a185cc28d70ecd0ef2518b9f8bfb7691f4b1bae0a"} Jan 28 15:18:34 crc kubenswrapper[4981]: I0128 15:18:34.742715 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-86c9dcbc4-gnfc9" Jan 28 15:18:34 crc kubenswrapper[4981]: I0128 15:18:34.760935 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vn8cw" podStartSLOduration=2.690377586 podStartE2EDuration="7.760913789s" podCreationTimestamp="2026-01-28 15:18:27 +0000 UTC" firstStartedPulling="2026-01-28 15:18:28.703476735 +0000 UTC m=+920.155634976" lastFinishedPulling="2026-01-28 15:18:33.774012938 +0000 UTC m=+925.226171179" observedRunningTime="2026-01-28 15:18:34.756136763 +0000 UTC m=+926.208295004" watchObservedRunningTime="2026-01-28 15:18:34.760913789 +0000 UTC m=+926.213072030" Jan 28 15:18:37 crc kubenswrapper[4981]: I0128 15:18:37.485213 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vn8cw" Jan 28 15:18:37 crc kubenswrapper[4981]: I0128 15:18:37.485469 4981 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vn8cw" Jan 28 15:18:37 crc kubenswrapper[4981]: I0128 15:18:37.541073 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vn8cw" Jan 28 15:18:37 crc kubenswrapper[4981]: I0128 15:18:37.557087 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-86c9dcbc4-gnfc9" podStartSLOduration=4.496595071 podStartE2EDuration="8.557069123s" podCreationTimestamp="2026-01-28 15:18:29 +0000 UTC" firstStartedPulling="2026-01-28 15:18:29.730376968 +0000 UTC m=+921.182535209" lastFinishedPulling="2026-01-28 15:18:33.79085102 +0000 UTC m=+925.243009261" observedRunningTime="2026-01-28 15:18:34.779689362 +0000 UTC m=+926.231847623" watchObservedRunningTime="2026-01-28 15:18:37.557069123 +0000 UTC m=+929.009227364" Jan 28 15:18:39 crc kubenswrapper[4981]: I0128 15:18:39.435109 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-86c9dcbc4-gnfc9" Jan 28 15:18:47 crc kubenswrapper[4981]: I0128 15:18:47.548624 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vn8cw" Jan 28 15:18:47 crc kubenswrapper[4981]: I0128 15:18:47.607442 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vn8cw"] Jan 28 15:18:47 crc kubenswrapper[4981]: I0128 15:18:47.832972 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vn8cw" podUID="5f6ba834-a18f-4334-bd00-05b460bf018b" containerName="registry-server" containerID="cri-o://66b816f1d621f73e499022bc3fe3958d1aeb7f333970ee8cf9ce5955c10161e2" gracePeriod=2 Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.200130 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vn8cw" Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.375436 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tnsc\" (UniqueName: \"kubernetes.io/projected/5f6ba834-a18f-4334-bd00-05b460bf018b-kube-api-access-2tnsc\") pod \"5f6ba834-a18f-4334-bd00-05b460bf018b\" (UID: \"5f6ba834-a18f-4334-bd00-05b460bf018b\") " Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.376109 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f6ba834-a18f-4334-bd00-05b460bf018b-utilities\") pod \"5f6ba834-a18f-4334-bd00-05b460bf018b\" (UID: \"5f6ba834-a18f-4334-bd00-05b460bf018b\") " Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.376977 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f6ba834-a18f-4334-bd00-05b460bf018b-catalog-content\") pod \"5f6ba834-a18f-4334-bd00-05b460bf018b\" (UID: \"5f6ba834-a18f-4334-bd00-05b460bf018b\") " Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.376914 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f6ba834-a18f-4334-bd00-05b460bf018b-utilities" (OuterVolumeSpecName: "utilities") pod "5f6ba834-a18f-4334-bd00-05b460bf018b" (UID: "5f6ba834-a18f-4334-bd00-05b460bf018b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.395490 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f6ba834-a18f-4334-bd00-05b460bf018b-kube-api-access-2tnsc" (OuterVolumeSpecName: "kube-api-access-2tnsc") pod "5f6ba834-a18f-4334-bd00-05b460bf018b" (UID: "5f6ba834-a18f-4334-bd00-05b460bf018b"). InnerVolumeSpecName "kube-api-access-2tnsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.424140 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f6ba834-a18f-4334-bd00-05b460bf018b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f6ba834-a18f-4334-bd00-05b460bf018b" (UID: "5f6ba834-a18f-4334-bd00-05b460bf018b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.499148 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tnsc\" (UniqueName: \"kubernetes.io/projected/5f6ba834-a18f-4334-bd00-05b460bf018b-kube-api-access-2tnsc\") on node \"crc\" DevicePath \"\"" Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.499176 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f6ba834-a18f-4334-bd00-05b460bf018b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.499203 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f6ba834-a18f-4334-bd00-05b460bf018b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.839302 4981 generic.go:334] "Generic (PLEG): container finished" podID="5f6ba834-a18f-4334-bd00-05b460bf018b" containerID="66b816f1d621f73e499022bc3fe3958d1aeb7f333970ee8cf9ce5955c10161e2" exitCode=0 Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.839334 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vn8cw" Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.839344 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vn8cw" event={"ID":"5f6ba834-a18f-4334-bd00-05b460bf018b","Type":"ContainerDied","Data":"66b816f1d621f73e499022bc3fe3958d1aeb7f333970ee8cf9ce5955c10161e2"} Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.839371 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vn8cw" event={"ID":"5f6ba834-a18f-4334-bd00-05b460bf018b","Type":"ContainerDied","Data":"abdcef0760f3f5ee1cd6915eeff4cacddc7a756fd493ba9971d9387408fa640f"} Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.839395 4981 scope.go:117] "RemoveContainer" containerID="66b816f1d621f73e499022bc3fe3958d1aeb7f333970ee8cf9ce5955c10161e2" Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.852263 4981 scope.go:117] "RemoveContainer" containerID="2796278c18534493275220006d8b699260c580eaa2e46749864f2a9293ddf429" Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.868057 4981 scope.go:117] "RemoveContainer" containerID="98f02145137a844216899a55a0a35c6cd2235c7ee10b2fa9bd6375a0ed360dbf" Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.869658 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vn8cw"] Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.874083 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vn8cw"] Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.900832 4981 scope.go:117] "RemoveContainer" containerID="66b816f1d621f73e499022bc3fe3958d1aeb7f333970ee8cf9ce5955c10161e2" Jan 28 15:18:48 crc kubenswrapper[4981]: E0128 15:18:48.901378 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66b816f1d621f73e499022bc3fe3958d1aeb7f333970ee8cf9ce5955c10161e2\": container with ID starting with 66b816f1d621f73e499022bc3fe3958d1aeb7f333970ee8cf9ce5955c10161e2 not found: ID does not exist" containerID="66b816f1d621f73e499022bc3fe3958d1aeb7f333970ee8cf9ce5955c10161e2" Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.901415 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66b816f1d621f73e499022bc3fe3958d1aeb7f333970ee8cf9ce5955c10161e2"} err="failed to get container status \"66b816f1d621f73e499022bc3fe3958d1aeb7f333970ee8cf9ce5955c10161e2\": rpc error: code = NotFound desc = could not find container \"66b816f1d621f73e499022bc3fe3958d1aeb7f333970ee8cf9ce5955c10161e2\": container with ID starting with 66b816f1d621f73e499022bc3fe3958d1aeb7f333970ee8cf9ce5955c10161e2 not found: ID does not exist" Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.901438 4981 scope.go:117] "RemoveContainer" containerID="2796278c18534493275220006d8b699260c580eaa2e46749864f2a9293ddf429" Jan 28 15:18:48 crc kubenswrapper[4981]: E0128 15:18:48.901859 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2796278c18534493275220006d8b699260c580eaa2e46749864f2a9293ddf429\": container with ID starting with 2796278c18534493275220006d8b699260c580eaa2e46749864f2a9293ddf429 not found: ID does not exist" containerID="2796278c18534493275220006d8b699260c580eaa2e46749864f2a9293ddf429" Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.901895 4981 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2796278c18534493275220006d8b699260c580eaa2e46749864f2a9293ddf429"} err="failed to get container status \"2796278c18534493275220006d8b699260c580eaa2e46749864f2a9293ddf429\": rpc error: code = NotFound desc = could not find container \"2796278c18534493275220006d8b699260c580eaa2e46749864f2a9293ddf429\": container with ID starting with 2796278c18534493275220006d8b699260c580eaa2e46749864f2a9293ddf429 not found: ID does not exist" Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.901922 4981 scope.go:117] "RemoveContainer" containerID="98f02145137a844216899a55a0a35c6cd2235c7ee10b2fa9bd6375a0ed360dbf" Jan 28 15:18:48 crc kubenswrapper[4981]: E0128 15:18:48.902239 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98f02145137a844216899a55a0a35c6cd2235c7ee10b2fa9bd6375a0ed360dbf\": container with ID starting with 98f02145137a844216899a55a0a35c6cd2235c7ee10b2fa9bd6375a0ed360dbf not found: ID does not exist" containerID="98f02145137a844216899a55a0a35c6cd2235c7ee10b2fa9bd6375a0ed360dbf" Jan 28 15:18:48 crc kubenswrapper[4981]: I0128 15:18:48.902257 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98f02145137a844216899a55a0a35c6cd2235c7ee10b2fa9bd6375a0ed360dbf"} err="failed to get container status \"98f02145137a844216899a55a0a35c6cd2235c7ee10b2fa9bd6375a0ed360dbf\": rpc error: code = NotFound desc = could not find container \"98f02145137a844216899a55a0a35c6cd2235c7ee10b2fa9bd6375a0ed360dbf\": container with ID starting with 98f02145137a844216899a55a0a35c6cd2235c7ee10b2fa9bd6375a0ed360dbf not found: ID does not exist" Jan 28 15:18:49 crc kubenswrapper[4981]: I0128 15:18:49.330605 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f6ba834-a18f-4334-bd00-05b460bf018b" path="/var/lib/kubelet/pods/5f6ba834-a18f-4334-bd00-05b460bf018b/volumes" Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.871918 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-gmjsg"] Jan 28 15:19:15 crc kubenswrapper[4981]: E0128 15:19:15.872648 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6ba834-a18f-4334-bd00-05b460bf018b" containerName="registry-server" Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.872665 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6ba834-a18f-4334-bd00-05b460bf018b" containerName="registry-server" Jan 28 15:19:15 crc kubenswrapper[4981]: E0128 15:19:15.872683 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6ba834-a18f-4334-bd00-05b460bf018b" containerName="extract-utilities" Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.872710 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6ba834-a18f-4334-bd00-05b460bf018b" containerName="extract-utilities" Jan 28 15:19:15 crc kubenswrapper[4981]: E0128 15:19:15.872727 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f6ba834-a18f-4334-bd00-05b460bf018b" containerName="extract-content" Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.872736 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f6ba834-a18f-4334-bd00-05b460bf018b" containerName="extract-content" Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.872885 4981 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5f6ba834-a18f-4334-bd00-05b460bf018b" containerName="registry-server" Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.873581 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-gmjsg" Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.877499 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-jx8sv" Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.879318 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ddkfp"] Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.880280 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ddkfp" Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.887511 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-gmjsg"] Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.887874 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qcbb9" Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.913143 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ddkfp"] Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.927924 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-gmd8n"] Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.928687 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmd8n" Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.932670 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-jtq54" Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.947967 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-c42cn"] Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.948741 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-c42cn" Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.953802 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-sqpq6" Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.954239 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-gmd8n"] Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.973039 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqpdl"] Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.973904 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqpdl" Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.975733 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-hp5qp" Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.988864 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-c42cn"] Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.991268 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb76b\" (UniqueName: \"kubernetes.io/projected/94e20f49-bc4f-4bbb-9c67-7f3dc5b925b5-kube-api-access-gb76b\") pod \"glance-operator-controller-manager-78fdd796fd-c42cn\" (UID: \"94e20f49-bc4f-4bbb-9c67-7f3dc5b925b5\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-c42cn" Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.991304 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tcvd\" (UniqueName: \"kubernetes.io/projected/e2393e50-201d-45e8-96c8-f2bfba6fed7c-kube-api-access-6tcvd\") pod \"cinder-operator-controller-manager-7478f7dbf9-ddkfp\" (UID: \"e2393e50-201d-45e8-96c8-f2bfba6fed7c\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ddkfp" Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.991332 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grnd6\" (UniqueName: \"kubernetes.io/projected/5d37dcb3-e31a-40e3-ba16-803490369e86-kube-api-access-grnd6\") pod \"barbican-operator-controller-manager-7f86f8796f-gmjsg\" (UID: \"5d37dcb3-e31a-40e3-ba16-803490369e86\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-gmjsg" Jan 28 15:19:15 crc kubenswrapper[4981]: I0128 15:19:15.991399 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5qm7\" (UniqueName: \"kubernetes.io/projected/0a5805bf-96b2-4893-8811-603eacec1cba-kube-api-access-b5qm7\") pod \"designate-operator-controller-manager-b45d7bf98-gmd8n\" (UID: \"0a5805bf-96b2-4893-8811-603eacec1cba\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmd8n" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.010159 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqpdl"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.041147 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4nfrz"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.041960 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4nfrz" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.044417 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-dsrzj" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.053988 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.054781 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.058000 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.059133 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-2fczs" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.093813 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert\") pod \"infra-operator-controller-manager-694cf4f878-mk8hk\" (UID: \"28d521cc-409b-485a-809b-98e3e552c042\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.093911 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gb76b\" (UniqueName: \"kubernetes.io/projected/94e20f49-bc4f-4bbb-9c67-7f3dc5b925b5-kube-api-access-gb76b\") pod \"glance-operator-controller-manager-78fdd796fd-c42cn\" (UID: \"94e20f49-bc4f-4bbb-9c67-7f3dc5b925b5\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-c42cn" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.093942 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f46jx\" (UniqueName: \"kubernetes.io/projected/26db2e76-9e22-4b02-8c7f-6ae79127ae41-kube-api-access-f46jx\") pod \"horizon-operator-controller-manager-77d5c5b54f-4nfrz\" (UID: \"26db2e76-9e22-4b02-8c7f-6ae79127ae41\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4nfrz" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.093967 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tcvd\" (UniqueName: \"kubernetes.io/projected/e2393e50-201d-45e8-96c8-f2bfba6fed7c-kube-api-access-6tcvd\") pod \"cinder-operator-controller-manager-7478f7dbf9-ddkfp\" (UID: \"e2393e50-201d-45e8-96c8-f2bfba6fed7c\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ddkfp" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.093996 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grnd6\" (UniqueName: \"kubernetes.io/projected/5d37dcb3-e31a-40e3-ba16-803490369e86-kube-api-access-grnd6\") pod \"barbican-operator-controller-manager-7f86f8796f-gmjsg\" (UID: \"5d37dcb3-e31a-40e3-ba16-803490369e86\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-gmjsg" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.094082 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9psrx\" (UniqueName: \"kubernetes.io/projected/28d521cc-409b-485a-809b-98e3e552c042-kube-api-access-9psrx\") pod \"infra-operator-controller-manager-694cf4f878-mk8hk\" (UID: \"28d521cc-409b-485a-809b-98e3e552c042\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.094114 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5qm7\" (UniqueName: \"kubernetes.io/projected/0a5805bf-96b2-4893-8811-603eacec1cba-kube-api-access-b5qm7\") pod 
\"designate-operator-controller-manager-b45d7bf98-gmd8n\" (UID: \"0a5805bf-96b2-4893-8811-603eacec1cba\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmd8n" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.094154 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29lr9\" (UniqueName: \"kubernetes.io/projected/9ea05521-0dfa-4175-b394-1b5e55fc4c7f-kube-api-access-29lr9\") pod \"heat-operator-controller-manager-594c8c9d5d-mqpdl\" (UID: \"9ea05521-0dfa-4175-b394-1b5e55fc4c7f\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqpdl" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.115594 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4nfrz"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.124355 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grnd6\" (UniqueName: \"kubernetes.io/projected/5d37dcb3-e31a-40e3-ba16-803490369e86-kube-api-access-grnd6\") pod \"barbican-operator-controller-manager-7f86f8796f-gmjsg\" (UID: \"5d37dcb3-e31a-40e3-ba16-803490369e86\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-gmjsg" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.124752 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gb76b\" (UniqueName: \"kubernetes.io/projected/94e20f49-bc4f-4bbb-9c67-7f3dc5b925b5-kube-api-access-gb76b\") pod \"glance-operator-controller-manager-78fdd796fd-c42cn\" (UID: \"94e20f49-bc4f-4bbb-9c67-7f3dc5b925b5\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-c42cn" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.129379 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.141234 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5qm7\" (UniqueName: \"kubernetes.io/projected/0a5805bf-96b2-4893-8811-603eacec1cba-kube-api-access-b5qm7\") pod \"designate-operator-controller-manager-b45d7bf98-gmd8n\" (UID: \"0a5805bf-96b2-4893-8811-603eacec1cba\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmd8n" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.150073 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tcvd\" (UniqueName: \"kubernetes.io/projected/e2393e50-201d-45e8-96c8-f2bfba6fed7c-kube-api-access-6tcvd\") pod \"cinder-operator-controller-manager-7478f7dbf9-ddkfp\" (UID: \"e2393e50-201d-45e8-96c8-f2bfba6fed7c\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ddkfp" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.159968 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-wq22r"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.160962 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-wq22r" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.166552 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-gj5tt" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.187280 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-pqffr"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.188162 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pqffr" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.192080 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-fqch4" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.195238 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9psrx\" (UniqueName: \"kubernetes.io/projected/28d521cc-409b-485a-809b-98e3e552c042-kube-api-access-9psrx\") pod \"infra-operator-controller-manager-694cf4f878-mk8hk\" (UID: \"28d521cc-409b-485a-809b-98e3e552c042\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.195300 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29lr9\" (UniqueName: \"kubernetes.io/projected/9ea05521-0dfa-4175-b394-1b5e55fc4c7f-kube-api-access-29lr9\") pod \"heat-operator-controller-manager-594c8c9d5d-mqpdl\" (UID: \"9ea05521-0dfa-4175-b394-1b5e55fc4c7f\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqpdl" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.195322 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert\") pod \"infra-operator-controller-manager-694cf4f878-mk8hk\" (UID: \"28d521cc-409b-485a-809b-98e3e552c042\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.195361 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f46jx\" (UniqueName: \"kubernetes.io/projected/26db2e76-9e22-4b02-8c7f-6ae79127ae41-kube-api-access-f46jx\") pod \"horizon-operator-controller-manager-77d5c5b54f-4nfrz\" (UID: \"26db2e76-9e22-4b02-8c7f-6ae79127ae41\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4nfrz" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.195408 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7v8b\" (UniqueName: \"kubernetes.io/projected/e2b38a13-7a0c-4836-9abc-be0e65837eb9-kube-api-access-x7v8b\") pod \"ironic-operator-controller-manager-598f7747c9-wq22r\" (UID: \"e2b38a13-7a0c-4836-9abc-be0e65837eb9\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-wq22r" Jan 28 15:19:16 crc kubenswrapper[4981]: E0128 15:19:16.195819 4981 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:16 crc kubenswrapper[4981]: E0128 15:19:16.195865 4981 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert podName:28d521cc-409b-485a-809b-98e3e552c042 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:16.695849906 +0000 UTC m=+968.148008147 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert") pod "infra-operator-controller-manager-694cf4f878-mk8hk" (UID: "28d521cc-409b-485a-809b-98e3e552c042") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.199304 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-gmjsg" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.206575 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-wq22r"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.216089 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ddkfp" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.220041 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29lr9\" (UniqueName: \"kubernetes.io/projected/9ea05521-0dfa-4175-b394-1b5e55fc4c7f-kube-api-access-29lr9\") pod \"heat-operator-controller-manager-594c8c9d5d-mqpdl\" (UID: \"9ea05521-0dfa-4175-b394-1b5e55fc4c7f\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqpdl" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.220113 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-pqffr"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.220786 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f46jx\" (UniqueName: \"kubernetes.io/projected/26db2e76-9e22-4b02-8c7f-6ae79127ae41-kube-api-access-f46jx\") pod \"horizon-operator-controller-manager-77d5c5b54f-4nfrz\" (UID: \"26db2e76-9e22-4b02-8c7f-6ae79127ae41\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4nfrz" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.221558 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9psrx\" (UniqueName: \"kubernetes.io/projected/28d521cc-409b-485a-809b-98e3e552c042-kube-api-access-9psrx\") pod \"infra-operator-controller-manager-694cf4f878-mk8hk\" (UID: \"28d521cc-409b-485a-809b-98e3e552c042\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.226074 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-6gnfx"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.227334 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6gnfx" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.230696 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-fphpp" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.233634 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-6gnfx"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.240129 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-w8d2t"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.240945 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-w8d2t" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.250412 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-4p5rw" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.252055 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmd8n" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.263257 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-wjvvk"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.264057 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wjvvk" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.267365 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-splvr" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.267808 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-c42cn" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.279001 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-w8d2t"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.282468 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-wjvvk"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.293654 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqpdl" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.304774 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-kd8bc"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.305460 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-kd8bc"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.305546 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kd8bc" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.306650 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-292mp\" (UniqueName: \"kubernetes.io/projected/05d30f3a-fdc7-4b65-a93b-747718217906-kube-api-access-292mp\") pod \"neutron-operator-controller-manager-78d58447c5-wjvvk\" (UID: \"05d30f3a-fdc7-4b65-a93b-747718217906\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wjvvk" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.306776 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j78qh\" (UniqueName: \"kubernetes.io/projected/47340953-e89f-4a20-bbd6-0e25c39b810a-kube-api-access-j78qh\") pod \"keystone-operator-controller-manager-b8b6d4659-pqffr\" (UID: \"47340953-e89f-4a20-bbd6-0e25c39b810a\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pqffr" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.306810 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7b62\" (UniqueName: \"kubernetes.io/projected/9f95744b-30e0-4d4f-9911-12ca57813aff-kube-api-access-j7b62\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-w8d2t\" (UID: \"9f95744b-30e0-4d4f-9911-12ca57813aff\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-w8d2t" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.306844 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf4gw\" (UniqueName: \"kubernetes.io/projected/6c02a009-565d-4217-9d71-ca0505f90cb0-kube-api-access-wf4gw\") pod \"manila-operator-controller-manager-78c6999f6f-6gnfx\" (UID: \"6c02a009-565d-4217-9d71-ca0505f90cb0\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6gnfx" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.306867 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7v8b\" (UniqueName: \"kubernetes.io/projected/e2b38a13-7a0c-4836-9abc-be0e65837eb9-kube-api-access-x7v8b\") pod \"ironic-operator-controller-manager-598f7747c9-wq22r\" (UID: \"e2b38a13-7a0c-4836-9abc-be0e65837eb9\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-wq22r" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.307476 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-gdnc5" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.326248 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-7tgrh"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.327010 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-7tgrh" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.333711 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-xk9bl" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.339632 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7v8b\" (UniqueName: \"kubernetes.io/projected/e2b38a13-7a0c-4836-9abc-be0e65837eb9-kube-api-access-x7v8b\") pod \"ironic-operator-controller-manager-598f7747c9-wq22r\" (UID: \"e2b38a13-7a0c-4836-9abc-be0e65837eb9\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-wq22r" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.348761 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-7tgrh"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.368115 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4nfrz" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.374528 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.375350 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.377168 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.377550 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-mbt24" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.397378 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.407696 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-8rf4r"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.409296 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8rf4r" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.409800 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-292mp\" (UniqueName: \"kubernetes.io/projected/05d30f3a-fdc7-4b65-a93b-747718217906-kube-api-access-292mp\") pod \"neutron-operator-controller-manager-78d58447c5-wjvvk\" (UID: \"05d30f3a-fdc7-4b65-a93b-747718217906\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wjvvk" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.409853 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njr8l\" (UniqueName: \"kubernetes.io/projected/462b383e-f994-4f35-a29c-6be57d7fd20c-kube-api-access-njr8l\") pod \"nova-operator-controller-manager-7bdb645866-kd8bc\" (UID: \"462b383e-f994-4f35-a29c-6be57d7fd20c\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kd8bc" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.409883 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d88nr\" (UniqueName: \"kubernetes.io/projected/655712aa-6ff8-4f99-ac13-85a3def79e97-kube-api-access-d88nr\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qllt9\" (UID: \"655712aa-6ff8-4f99-ac13-85a3def79e97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.409958 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j78qh\" (UniqueName: \"kubernetes.io/projected/47340953-e89f-4a20-bbd6-0e25c39b810a-kube-api-access-j78qh\") pod \"keystone-operator-controller-manager-b8b6d4659-pqffr\" (UID: \"47340953-e89f-4a20-bbd6-0e25c39b810a\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pqffr" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.409983 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7b62\" (UniqueName: \"kubernetes.io/projected/9f95744b-30e0-4d4f-9911-12ca57813aff-kube-api-access-j7b62\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-w8d2t\" (UID: \"9f95744b-30e0-4d4f-9911-12ca57813aff\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-w8d2t" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.410001 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/655712aa-6ff8-4f99-ac13-85a3def79e97-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qllt9\" (UID: \"655712aa-6ff8-4f99-ac13-85a3def79e97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.410025 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf4gw\" (UniqueName: \"kubernetes.io/projected/6c02a009-565d-4217-9d71-ca0505f90cb0-kube-api-access-wf4gw\") pod \"manila-operator-controller-manager-78c6999f6f-6gnfx\" (UID: \"6c02a009-565d-4217-9d71-ca0505f90cb0\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6gnfx" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.410059 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xtnd4\" (UniqueName: \"kubernetes.io/projected/305a2f40-90a1-4e46-83a6-0ae818e35157-kube-api-access-xtnd4\") pod \"octavia-operator-controller-manager-5f4cd88d46-7tgrh\" (UID: \"305a2f40-90a1-4e46-83a6-0ae818e35157\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-7tgrh" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.421552 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-x2qmv" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.422031 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-8rf4r"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.456702 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-292mp\" (UniqueName: \"kubernetes.io/projected/05d30f3a-fdc7-4b65-a93b-747718217906-kube-api-access-292mp\") pod \"neutron-operator-controller-manager-78d58447c5-wjvvk\" (UID: \"05d30f3a-fdc7-4b65-a93b-747718217906\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wjvvk" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.457341 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j78qh\" (UniqueName: \"kubernetes.io/projected/47340953-e89f-4a20-bbd6-0e25c39b810a-kube-api-access-j78qh\") pod \"keystone-operator-controller-manager-b8b6d4659-pqffr\" (UID: \"47340953-e89f-4a20-bbd6-0e25c39b810a\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pqffr" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.459846 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf4gw\" (UniqueName: \"kubernetes.io/projected/6c02a009-565d-4217-9d71-ca0505f90cb0-kube-api-access-wf4gw\") pod \"manila-operator-controller-manager-78c6999f6f-6gnfx\" (UID: \"6c02a009-565d-4217-9d71-ca0505f90cb0\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6gnfx" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.464924 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-v4hcv"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.465865 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-v4hcv" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.470835 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7b62\" (UniqueName: \"kubernetes.io/projected/9f95744b-30e0-4d4f-9911-12ca57813aff-kube-api-access-j7b62\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-w8d2t\" (UID: \"9f95744b-30e0-4d4f-9911-12ca57813aff\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-w8d2t" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.482982 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-q828q" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.504931 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-v4hcv"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.516469 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hb7s\" (UniqueName: \"kubernetes.io/projected/21991fd6-b7f4-48cc-b372-5e43be416857-kube-api-access-5hb7s\") pod \"placement-operator-controller-manager-79d5ccc684-v4hcv\" (UID: \"21991fd6-b7f4-48cc-b372-5e43be416857\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-v4hcv" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.516517 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njr8l\" (UniqueName: \"kubernetes.io/projected/462b383e-f994-4f35-a29c-6be57d7fd20c-kube-api-access-njr8l\") pod \"nova-operator-controller-manager-7bdb645866-kd8bc\" (UID: \"462b383e-f994-4f35-a29c-6be57d7fd20c\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kd8bc" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.516557 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d88nr\" (UniqueName: \"kubernetes.io/projected/655712aa-6ff8-4f99-ac13-85a3def79e97-kube-api-access-d88nr\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qllt9\" (UID: \"655712aa-6ff8-4f99-ac13-85a3def79e97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.516620 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/655712aa-6ff8-4f99-ac13-85a3def79e97-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qllt9\" (UID: \"655712aa-6ff8-4f99-ac13-85a3def79e97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.516650 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97fwb\" (UniqueName: \"kubernetes.io/projected/34ddd48f-2151-4df8-af17-70b926965a9e-kube-api-access-97fwb\") pod \"ovn-operator-controller-manager-6f75f45d54-8rf4r\" (UID: \"34ddd48f-2151-4df8-af17-70b926965a9e\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8rf4r" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.516686 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtnd4\" (UniqueName: 
\"kubernetes.io/projected/305a2f40-90a1-4e46-83a6-0ae818e35157-kube-api-access-xtnd4\") pod \"octavia-operator-controller-manager-5f4cd88d46-7tgrh\" (UID: \"305a2f40-90a1-4e46-83a6-0ae818e35157\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-7tgrh" Jan 28 15:19:16 crc kubenswrapper[4981]: E0128 15:19:16.517649 4981 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:16 crc kubenswrapper[4981]: E0128 15:19:16.517695 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655712aa-6ff8-4f99-ac13-85a3def79e97-cert podName:655712aa-6ff8-4f99-ac13-85a3def79e97 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:17.017679819 +0000 UTC m=+968.469838060 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/655712aa-6ff8-4f99-ac13-85a3def79e97-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qllt9" (UID: "655712aa-6ff8-4f99-ac13-85a3def79e97") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.520672 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-wq22r" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.522083 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-6qxz5"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.522977 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6qxz5" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.538030 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-kn2j8" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.556640 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njr8l\" (UniqueName: \"kubernetes.io/projected/462b383e-f994-4f35-a29c-6be57d7fd20c-kube-api-access-njr8l\") pod \"nova-operator-controller-manager-7bdb645866-kd8bc\" (UID: \"462b383e-f994-4f35-a29c-6be57d7fd20c\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kd8bc" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.569878 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d88nr\" (UniqueName: \"kubernetes.io/projected/655712aa-6ff8-4f99-ac13-85a3def79e97-kube-api-access-d88nr\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qllt9\" (UID: \"655712aa-6ff8-4f99-ac13-85a3def79e97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.577846 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtnd4\" (UniqueName: \"kubernetes.io/projected/305a2f40-90a1-4e46-83a6-0ae818e35157-kube-api-access-xtnd4\") pod \"octavia-operator-controller-manager-5f4cd88d46-7tgrh\" (UID: \"305a2f40-90a1-4e46-83a6-0ae818e35157\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-7tgrh" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.588356 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-6qxz5"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.607850 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pqffr" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.624851 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97fwb\" (UniqueName: \"kubernetes.io/projected/34ddd48f-2151-4df8-af17-70b926965a9e-kube-api-access-97fwb\") pod \"ovn-operator-controller-manager-6f75f45d54-8rf4r\" (UID: \"34ddd48f-2151-4df8-af17-70b926965a9e\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8rf4r" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.624907 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxmg5\" (UniqueName: \"kubernetes.io/projected/7338b601-fe21-458b-97b8-99977fcdb582-kube-api-access-zxmg5\") pod \"swift-operator-controller-manager-547cbdb99f-6qxz5\" (UID: \"7338b601-fe21-458b-97b8-99977fcdb582\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6qxz5" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.624995 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hb7s\" (UniqueName: \"kubernetes.io/projected/21991fd6-b7f4-48cc-b372-5e43be416857-kube-api-access-5hb7s\") pod \"placement-operator-controller-manager-79d5ccc684-v4hcv\" (UID: \"21991fd6-b7f4-48cc-b372-5e43be416857\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-v4hcv" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.625641 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6gnfx" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.650876 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97fwb\" (UniqueName: \"kubernetes.io/projected/34ddd48f-2151-4df8-af17-70b926965a9e-kube-api-access-97fwb\") pod \"ovn-operator-controller-manager-6f75f45d54-8rf4r\" (UID: \"34ddd48f-2151-4df8-af17-70b926965a9e\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8rf4r" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.659097 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hb7s\" (UniqueName: \"kubernetes.io/projected/21991fd6-b7f4-48cc-b372-5e43be416857-kube-api-access-5hb7s\") pod \"placement-operator-controller-manager-79d5ccc684-v4hcv\" (UID: \"21991fd6-b7f4-48cc-b372-5e43be416857\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-v4hcv" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.675853 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-w8d2t" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.677920 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-f8ckc"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.679308 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-f8ckc" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.685698 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-4dvqk" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.693258 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-f8ckc"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.711133 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wjvvk" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.728595 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxmg5\" (UniqueName: \"kubernetes.io/projected/7338b601-fe21-458b-97b8-99977fcdb582-kube-api-access-zxmg5\") pod \"swift-operator-controller-manager-547cbdb99f-6qxz5\" (UID: \"7338b601-fe21-458b-97b8-99977fcdb582\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6qxz5" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.729615 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert\") pod \"infra-operator-controller-manager-694cf4f878-mk8hk\" (UID: \"28d521cc-409b-485a-809b-98e3e552c042\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk" Jan 28 15:19:16 crc kubenswrapper[4981]: E0128 15:19:16.729775 4981 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:16 crc kubenswrapper[4981]: E0128 15:19:16.730199 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert podName:28d521cc-409b-485a-809b-98e3e552c042 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:17.729821971 +0000 UTC m=+969.181980212 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert") pod "infra-operator-controller-manager-694cf4f878-mk8hk" (UID: "28d521cc-409b-485a-809b-98e3e552c042") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.730820 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kd8bc" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.753624 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxmg5\" (UniqueName: \"kubernetes.io/projected/7338b601-fe21-458b-97b8-99977fcdb582-kube-api-access-zxmg5\") pod \"swift-operator-controller-manager-547cbdb99f-6qxz5\" (UID: \"7338b601-fe21-458b-97b8-99977fcdb582\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6qxz5" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.759256 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-7tgrh" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.766388 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-pj2hb"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.768290 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pj2hb" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.770885 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-gjqmv" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.794231 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-pj2hb"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.824766 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-65v5g"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.825941 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-65v5g" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.828087 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-vdqp2" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.831307 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6q5h\" (UniqueName: \"kubernetes.io/projected/18a8ea11-fca0-4503-a458-90ae9e542401-kube-api-access-l6q5h\") pod \"telemetry-operator-controller-manager-85cd9769bb-f8ckc\" (UID: \"18a8ea11-fca0-4503-a458-90ae9e542401\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-f8ckc" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.846575 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-65v5g"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.859455 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8rf4r" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.861708 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.863031 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.868460 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.868695 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-ndrfb" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.869066 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.876491 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-v4hcv" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.876869 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.901551 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6qxz5" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.916734 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjp7n"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.918106 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjp7n" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.919999 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-cpjwn" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.934978 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9rvs\" (UniqueName: \"kubernetes.io/projected/b4355527-cc7c-436f-a9b0-69f4860f0e36-kube-api-access-w9rvs\") pod \"test-operator-controller-manager-69797bbcbd-pj2hb\" (UID: \"b4355527-cc7c-436f-a9b0-69f4860f0e36\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pj2hb" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.935173 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgf2n\" (UniqueName: \"kubernetes.io/projected/82289c62-674e-483e-ac47-f09b000a0c90-kube-api-access-cgf2n\") pod \"watcher-operator-controller-manager-564965969-65v5g\" (UID: \"82289c62-674e-483e-ac47-f09b000a0c90\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-65v5g" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.935340 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6q5h\" (UniqueName: \"kubernetes.io/projected/18a8ea11-fca0-4503-a458-90ae9e542401-kube-api-access-l6q5h\") pod \"telemetry-operator-controller-manager-85cd9769bb-f8ckc\" (UID: \"18a8ea11-fca0-4503-a458-90ae9e542401\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-f8ckc" Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.938766 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjp7n"] Jan 28 15:19:16 crc kubenswrapper[4981]: I0128 15:19:16.963290 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6q5h\" (UniqueName: \"kubernetes.io/projected/18a8ea11-fca0-4503-a458-90ae9e542401-kube-api-access-l6q5h\") pod \"telemetry-operator-controller-manager-85cd9769bb-f8ckc\" (UID: \"18a8ea11-fca0-4503-a458-90ae9e542401\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-f8ckc" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.036427 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgf2n\" (UniqueName: \"kubernetes.io/projected/82289c62-674e-483e-ac47-f09b000a0c90-kube-api-access-cgf2n\") pod \"watcher-operator-controller-manager-564965969-65v5g\" (UID: 
\"82289c62-674e-483e-ac47-f09b000a0c90\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-65v5g" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.036483 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7dx6\" (UniqueName: \"kubernetes.io/projected/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-kube-api-access-h7dx6\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.036508 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.036531 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-metrics-certs\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.036567 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/655712aa-6ff8-4f99-ac13-85a3def79e97-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qllt9\" (UID: \"655712aa-6ff8-4f99-ac13-85a3def79e97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.036599 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6wfb\" (UniqueName: \"kubernetes.io/projected/ad2c98b1-4994-4602-af9f-6dce33122651-kube-api-access-h6wfb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xjp7n\" (UID: \"ad2c98b1-4994-4602-af9f-6dce33122651\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjp7n" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.036628 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9rvs\" (UniqueName: \"kubernetes.io/projected/b4355527-cc7c-436f-a9b0-69f4860f0e36-kube-api-access-w9rvs\") pod \"test-operator-controller-manager-69797bbcbd-pj2hb\" (UID: \"b4355527-cc7c-436f-a9b0-69f4860f0e36\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pj2hb" Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.036903 4981 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.036952 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655712aa-6ff8-4f99-ac13-85a3def79e97-cert podName:655712aa-6ff8-4f99-ac13-85a3def79e97 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:18.036937518 +0000 UTC m=+969.489095759 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/655712aa-6ff8-4f99-ac13-85a3def79e97-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qllt9" (UID: "655712aa-6ff8-4f99-ac13-85a3def79e97") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.054484 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9rvs\" (UniqueName: \"kubernetes.io/projected/b4355527-cc7c-436f-a9b0-69f4860f0e36-kube-api-access-w9rvs\") pod \"test-operator-controller-manager-69797bbcbd-pj2hb\" (UID: \"b4355527-cc7c-436f-a9b0-69f4860f0e36\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pj2hb" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.055453 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgf2n\" (UniqueName: \"kubernetes.io/projected/82289c62-674e-483e-ac47-f09b000a0c90-kube-api-access-cgf2n\") pod \"watcher-operator-controller-manager-564965969-65v5g\" (UID: \"82289c62-674e-483e-ac47-f09b000a0c90\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-65v5g" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.075527 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-f8ckc" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.111755 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pj2hb" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.143042 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-metrics-certs\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.143115 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6wfb\" (UniqueName: \"kubernetes.io/projected/ad2c98b1-4994-4602-af9f-6dce33122651-kube-api-access-h6wfb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xjp7n\" (UID: \"ad2c98b1-4994-4602-af9f-6dce33122651\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjp7n" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.143211 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7dx6\" (UniqueName: \"kubernetes.io/projected/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-kube-api-access-h7dx6\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.143233 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t" Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.143330 4981 secret.go:188] Couldn't get 
secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.143375 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs podName:b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:17.643361303 +0000 UTC m=+969.095519544 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs") pod "openstack-operator-controller-manager-fcdbf6b45-9f88t" (UID: "b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74") : secret "webhook-server-cert" not found Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.143714 4981 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.143737 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-metrics-certs podName:b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:17.643730373 +0000 UTC m=+969.095888604 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-metrics-certs") pod "openstack-operator-controller-manager-fcdbf6b45-9f88t" (UID: "b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74") : secret "metrics-server-cert" not found Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.173766 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7dx6\" (UniqueName: \"kubernetes.io/projected/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-kube-api-access-h7dx6\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.176759 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6wfb\" (UniqueName: \"kubernetes.io/projected/ad2c98b1-4994-4602-af9f-6dce33122651-kube-api-access-h6wfb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-xjp7n\" (UID: \"ad2c98b1-4994-4602-af9f-6dce33122651\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjp7n" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.192306 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-65v5g" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.219279 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqpdl"] Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.265973 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjp7n" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.278618 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ddkfp"] Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.285648 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4nfrz"] Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.314094 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-c42cn"] Jan 28 15:19:17 crc kubenswrapper[4981]: W0128 15:19:17.348873 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2b38a13_7a0c_4836_9abc_be0e65837eb9.slice/crio-f6d8e1454c4630688645164e0d7eedcc62ab5971023019ef18c35a74a03b2a08 WatchSource:0}: Error finding container f6d8e1454c4630688645164e0d7eedcc62ab5971023019ef18c35a74a03b2a08: Status 404 returned error can't find the container with id f6d8e1454c4630688645164e0d7eedcc62ab5971023019ef18c35a74a03b2a08 Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.367289 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-gmjsg"] Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.367323 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-wq22r"] Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.651914 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.651959 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-metrics-certs\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t" Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.652289 4981 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.652428 4981 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.652437 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs podName:b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:18.652379063 +0000 UTC m=+970.104537354 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs") pod "openstack-operator-controller-manager-fcdbf6b45-9f88t" (UID: "b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74") : secret "webhook-server-cert" not found Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.652560 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-metrics-certs podName:b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:18.652520006 +0000 UTC m=+970.104678247 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-metrics-certs") pod "openstack-operator-controller-manager-fcdbf6b45-9f88t" (UID: "b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74") : secret "metrics-server-cert" not found Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.702426 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-8rf4r"] Jan 28 15:19:17 crc kubenswrapper[4981]: W0128 15:19:17.704136 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34ddd48f_2151_4df8_af17_70b926965a9e.slice/crio-5fb6a6c775752517b0be2fc6980573f878ca4c424237a386d9d6ef270b1f67fc WatchSource:0}: Error finding container 5fb6a6c775752517b0be2fc6980573f878ca4c424237a386d9d6ef270b1f67fc: Status 404 returned error can't find the container with id 5fb6a6c775752517b0be2fc6980573f878ca4c424237a386d9d6ef270b1f67fc Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.737768 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-w8d2t"] Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.754328 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert\") pod \"infra-operator-controller-manager-694cf4f878-mk8hk\" (UID: \"28d521cc-409b-485a-809b-98e3e552c042\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk" Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.754791 4981 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.754865 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert podName:28d521cc-409b-485a-809b-98e3e552c042 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:19.754843064 +0000 UTC m=+971.207001355 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert") pod "infra-operator-controller-manager-694cf4f878-mk8hk" (UID: "28d521cc-409b-485a-809b-98e3e552c042") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.755199 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-gmd8n"] Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.771600 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-6gnfx"] Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.780249 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-7tgrh"] Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.799317 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-wjvvk"] Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.806229 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-pqffr"] Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.826611 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-kd8bc"] Jan 28 15:19:17 crc kubenswrapper[4981]: W0128 15:19:17.830957 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4355527_cc7c_436f_a9b0_69f4860f0e36.slice/crio-9ce68a9e3ec6ade3306be86e872a79c32d5f2661e805e59283f9adf3d32770e9 WatchSource:0}: Error finding container 9ce68a9e3ec6ade3306be86e872a79c32d5f2661e805e59283f9adf3d32770e9: Status 404 returned error can't find the container with id 9ce68a9e3ec6ade3306be86e872a79c32d5f2661e805e59283f9adf3d32770e9 Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.833099 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j78qh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-pqffr_openstack-operators(47340953-e89f-4a20-bbd6-0e25c39b810a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.834266 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pqffr" podUID="47340953-e89f-4a20-bbd6-0e25c39b810a" Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.836482 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5hb7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
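When a container start fails with an unhandled error, kuberuntime_manager.go dumps the entire v1.Container spec in Go struct notation, which is why these records are so large; the fragment Port:{0 8081 } is simply an intstr.IntOrString printed raw (type Int, value 8081). For readability, here is the liveness probe from the dumps above re-rendered with ordinary client-go types; this is a reconstruction of the logged values, not code from the operators:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// livenessProbe mirrors the LivenessProbe:&Probe{...} fragment in the dump:
// an HTTP GET against /healthz on port 8081, first checked after 15s, then
// every 20s, failing the container after 3 consecutive misses.
var livenessProbe = &corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path:   "/healthz",
			Port:   intstr.FromInt(8081), // prints as {0 8081 } in the raw dump
			Scheme: corev1.URISchemeHTTP,
		},
	},
	InitialDelaySeconds: 15,
	TimeoutSeconds:      1,
	PeriodSeconds:       20,
	SuccessThreshold:    1,
	FailureThreshold:    3,
}

func main() { fmt.Printf("%+v\n", livenessProbe) }
```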
Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.836937 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zxmg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-6qxz5_openstack-operators(7338b601-fe21-458b-97b8-99977fcdb582): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.838396 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6qxz5" podUID="7338b601-fe21-458b-97b8-99977fcdb582"
Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.838479 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-v4hcv" podUID="21991fd6-b7f4-48cc-b372-5e43be416857"
Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.841347 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9rvs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-pj2hb_openstack-operators(b4355527-cc7c-436f-a9b0-69f4860f0e36): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.842516 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-pj2hb"]
Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.842579 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pj2hb" podUID="b4355527-cc7c-436f-a9b0-69f4860f0e36"
Jan 28 15:19:17 crc kubenswrapper[4981]: W0128 15:19:17.844585 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad2c98b1_4994_4602_af9f_6dce33122651.slice/crio-6b700da4adda1a9da51f3f333f4fee6dd655223c4fe9e0ad1ac915102b02b12c WatchSource:0}: Error finding container 6b700da4adda1a9da51f3f333f4fee6dd655223c4fe9e0ad1ac915102b02b12c: Status 404 returned error can't find the container with id 6b700da4adda1a9da51f3f333f4fee6dd655223c4fe9e0ad1ac915102b02b12c
Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.849592 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h6wfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-xjp7n_openstack-operators(ad2c98b1-4994-4602-af9f-6dce33122651): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Jan 28 15:19:17 crc kubenswrapper[4981]: W0128 15:19:17.849705 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod462b383e_f994_4f35_a29c_6be57d7fd20c.slice/crio-74040d76b698c609be51b08f65360609171ec1027857d1dbc4e1331e80836375 WatchSource:0}: Error finding container 74040d76b698c609be51b08f65360609171ec1027857d1dbc4e1331e80836375: Status 404 returned error can't find the container with id 74040d76b698c609be51b08f65360609171ec1027857d1dbc4e1331e80836375
Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.849981 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-6qxz5"]
Jan 28 15:19:17 crc kubenswrapper[4981]: W0128 15:19:17.850519 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18a8ea11_fca0_4503_a458_90ae9e542401.slice/crio-3c508cf1dff91299de9671862942e30ec9f1396786de1f9e9a51cec6232c1562 WatchSource:0}: Error finding container 3c508cf1dff91299de9671862942e30ec9f1396786de1f9e9a51cec6232c1562: Status 404 returned error can't find the container with id 3c508cf1dff91299de9671862942e30ec9f1396786de1f9e9a51cec6232c1562
Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.850980 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjp7n" podUID="ad2c98b1-4994-4602-af9f-6dce33122651"
Jan 28 15:19:17 crc kubenswrapper[4981]: W0128 15:19:17.852092 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82289c62_674e_483e_ac47_f09b000a0c90.slice/crio-f9ef3d1fd4f4c17c62b7d47af87fbfe1aa4996fcf850086a562efa142d125fee WatchSource:0}: Error finding container f9ef3d1fd4f4c17c62b7d47af87fbfe1aa4996fcf850086a562efa142d125fee: Status 404 returned error can't find the container with id f9ef3d1fd4f4c17c62b7d47af87fbfe1aa4996fcf850086a562efa142d125fee
Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.852486 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-njr8l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-kd8bc_openstack-operators(462b383e-f994-4f35-a29c-6be57d7fd20c): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-kd8bc_openstack-operators(462b383e-f994-4f35-a29c-6be57d7fd20c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.854641 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l6q5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-85cd9769bb-f8ckc_openstack-operators(18a8ea11-fca0-4503-a458-90ae9e542401): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:19:17 crc 
kubenswrapper[4981]: E0128 15:19:17.854701 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kd8bc" podUID="462b383e-f994-4f35-a29c-6be57d7fd20c" Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.855715 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-f8ckc" podUID="18a8ea11-fca0-4503-a458-90ae9e542401" Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.856472 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cgf2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-65v5g_openstack-operators(82289c62-674e-483e-ac47-f09b000a0c90): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.856550 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-v4hcv"] Jan 28 15:19:17 crc kubenswrapper[4981]: E0128 15:19:17.857826 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-65v5g" podUID="82289c62-674e-483e-ac47-f09b000a0c90" Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.862266 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-f8ckc"] Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.868146 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjp7n"] Jan 28 15:19:17 crc kubenswrapper[4981]: I0128 15:19:17.874049 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-65v5g"] Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.029304 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-w8d2t" event={"ID":"9f95744b-30e0-4d4f-9911-12ca57813aff","Type":"ContainerStarted","Data":"817ae96182c20d0772df4f48b32b3afa058f18fda0111e65d4a448b08b3a6415"} Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.032173 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wjvvk" event={"ID":"05d30f3a-fdc7-4b65-a93b-747718217906","Type":"ContainerStarted","Data":"ac67a01b8473b20541f9192d99942f51f028772993ba179b9bdeed29cd4a0e8b"} Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.033936 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ddkfp" event={"ID":"e2393e50-201d-45e8-96c8-f2bfba6fed7c","Type":"ContainerStarted","Data":"faea1eaef5f277806151396bec218eaf58b98b358597dff30533e65d89f5ade7"} Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.035640 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-f8ckc" event={"ID":"18a8ea11-fca0-4503-a458-90ae9e542401","Type":"ContainerStarted","Data":"3c508cf1dff91299de9671862942e30ec9f1396786de1f9e9a51cec6232c1562"} Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.036468 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjp7n" event={"ID":"ad2c98b1-4994-4602-af9f-6dce33122651","Type":"ContainerStarted","Data":"6b700da4adda1a9da51f3f333f4fee6dd655223c4fe9e0ad1ac915102b02b12c"} Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.038237 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pj2hb" event={"ID":"b4355527-cc7c-436f-a9b0-69f4860f0e36","Type":"ContainerStarted","Data":"9ce68a9e3ec6ade3306be86e872a79c32d5f2661e805e59283f9adf3d32770e9"} Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.040477 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-7tgrh" event={"ID":"305a2f40-90a1-4e46-83a6-0ae818e35157","Type":"ContainerStarted","Data":"453c0782459982d2d743469c0138af0b4b7b4b7dca266c4ce5ada0e5e55604c8"} Jan 28 15:19:18 crc kubenswrapper[4981]: E0128 15:19:18.041114 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-f8ckc" podUID="18a8ea11-fca0-4503-a458-90ae9e542401" Jan 28 15:19:18 crc kubenswrapper[4981]: E0128 15:19:18.041435 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjp7n" podUID="ad2c98b1-4994-4602-af9f-6dce33122651" Jan 28 15:19:18 crc kubenswrapper[4981]: E0128 15:19:18.041905 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pj2hb" podUID="b4355527-cc7c-436f-a9b0-69f4860f0e36" Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.044818 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-65v5g" event={"ID":"82289c62-674e-483e-ac47-f09b000a0c90","Type":"ContainerStarted","Data":"f9ef3d1fd4f4c17c62b7d47af87fbfe1aa4996fcf850086a562efa142d125fee"} Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.056233 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-v4hcv" event={"ID":"21991fd6-b7f4-48cc-b372-5e43be416857","Type":"ContainerStarted","Data":"303cb41297a234e6c6cb630ea077965a88fce2afb08cf47816fcfb99cefef638"} Jan 28 15:19:18 crc kubenswrapper[4981]: E0128 15:19:18.057402 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-v4hcv" podUID="21991fd6-b7f4-48cc-b372-5e43be416857" Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.059228 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/655712aa-6ff8-4f99-ac13-85a3def79e97-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qllt9\" (UID: \"655712aa-6ff8-4f99-ac13-85a3def79e97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9" Jan 28 15:19:18 crc kubenswrapper[4981]: E0128 15:19:18.059449 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-65v5g" podUID="82289c62-674e-483e-ac47-f09b000a0c90" Jan 28 15:19:18 crc kubenswrapper[4981]: E0128 15:19:18.060151 4981 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 
15:19:18 crc kubenswrapper[4981]: E0128 15:19:18.060228 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655712aa-6ff8-4f99-ac13-85a3def79e97-cert podName:655712aa-6ff8-4f99-ac13-85a3def79e97 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:20.060211376 +0000 UTC m=+971.512369617 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/655712aa-6ff8-4f99-ac13-85a3def79e97-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qllt9" (UID: "655712aa-6ff8-4f99-ac13-85a3def79e97") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.064020 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-c42cn" event={"ID":"94e20f49-bc4f-4bbb-9c67-7f3dc5b925b5","Type":"ContainerStarted","Data":"998c182793e2e5f522343734f70b377d1464f86dd90ec30d21432af57258bfe4"} Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.065577 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqpdl" event={"ID":"9ea05521-0dfa-4175-b394-1b5e55fc4c7f","Type":"ContainerStarted","Data":"4cce0203fe7cafde1a96d41bc680c2dd5369270a5f91c54ae003b7b0f738b77c"} Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.068119 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmd8n" event={"ID":"0a5805bf-96b2-4893-8811-603eacec1cba","Type":"ContainerStarted","Data":"baaa80a8920037684eeb04f989cb643eb2b8113a4d4e37d4936c4b0a8e914e98"} Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.070400 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kd8bc" event={"ID":"462b383e-f994-4f35-a29c-6be57d7fd20c","Type":"ContainerStarted","Data":"74040d76b698c609be51b08f65360609171ec1027857d1dbc4e1331e80836375"} Jan 28 15:19:18 crc kubenswrapper[4981]: E0128 15:19:18.072511 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kd8bc" podUID="462b383e-f994-4f35-a29c-6be57d7fd20c" Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.086430 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-wq22r" event={"ID":"e2b38a13-7a0c-4836-9abc-be0e65837eb9","Type":"ContainerStarted","Data":"f6d8e1454c4630688645164e0d7eedcc62ab5971023019ef18c35a74a03b2a08"} Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.091384 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8rf4r" event={"ID":"34ddd48f-2151-4df8-af17-70b926965a9e","Type":"ContainerStarted","Data":"5fb6a6c775752517b0be2fc6980573f878ca4c424237a386d9d6ef270b1f67fc"} Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.104706 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pqffr" 
event={"ID":"47340953-e89f-4a20-bbd6-0e25c39b810a","Type":"ContainerStarted","Data":"2b8e0f726f7dc3ec1ee4733fd3224d6fa8d54c5d614bfe255b2a1bc5c788d816"} Jan 28 15:19:18 crc kubenswrapper[4981]: E0128 15:19:18.106113 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pqffr" podUID="47340953-e89f-4a20-bbd6-0e25c39b810a" Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.109300 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-gmjsg" event={"ID":"5d37dcb3-e31a-40e3-ba16-803490369e86","Type":"ContainerStarted","Data":"f47f1fd7e414f91ccd9f2eefefbac1ae9b75f8536213c91059b3f5610723c9e6"} Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.111510 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4nfrz" event={"ID":"26db2e76-9e22-4b02-8c7f-6ae79127ae41","Type":"ContainerStarted","Data":"0b0db6b877eb21a3486457269faff531a4b0442f8d41d2f27f16b1071fca0ff7"} Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.112676 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6qxz5" event={"ID":"7338b601-fe21-458b-97b8-99977fcdb582","Type":"ContainerStarted","Data":"72d9cbfbddf038381091cd0441728ae99ecd0bf7efeb4d8204a329e4c53028f8"} Jan 28 15:19:18 crc kubenswrapper[4981]: E0128 15:19:18.120984 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6qxz5" podUID="7338b601-fe21-458b-97b8-99977fcdb582" Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.122417 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6gnfx" event={"ID":"6c02a009-565d-4217-9d71-ca0505f90cb0","Type":"ContainerStarted","Data":"92f046aeed465e07ca6dc8267a93b8dd9aa44f87e7c62f092c6ebec0907f25d2"} Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.668913 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t" Jan 28 15:19:18 crc kubenswrapper[4981]: I0128 15:19:18.668999 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-metrics-certs\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t" Jan 28 15:19:18 crc kubenswrapper[4981]: E0128 15:19:18.669076 4981 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" 
not found Jan 28 15:19:18 crc kubenswrapper[4981]: E0128 15:19:18.669128 4981 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:19:18 crc kubenswrapper[4981]: E0128 15:19:18.669166 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs podName:b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:20.66914457 +0000 UTC m=+972.121302901 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs") pod "openstack-operator-controller-manager-fcdbf6b45-9f88t" (UID: "b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74") : secret "webhook-server-cert" not found Jan 28 15:19:18 crc kubenswrapper[4981]: E0128 15:19:18.669203 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-metrics-certs podName:b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:20.669176911 +0000 UTC m=+972.121335252 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-metrics-certs") pod "openstack-operator-controller-manager-fcdbf6b45-9f88t" (UID: "b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74") : secret "metrics-server-cert" not found Jan 28 15:19:19 crc kubenswrapper[4981]: E0128 15:19:19.177551 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-65v5g" podUID="82289c62-674e-483e-ac47-f09b000a0c90" Jan 28 15:19:19 crc kubenswrapper[4981]: E0128 15:19:19.177603 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjp7n" podUID="ad2c98b1-4994-4602-af9f-6dce33122651" Jan 28 15:19:19 crc kubenswrapper[4981]: E0128 15:19:19.177639 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kd8bc" podUID="462b383e-f994-4f35-a29c-6be57d7fd20c" Jan 28 15:19:19 crc kubenswrapper[4981]: E0128 15:19:19.177693 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-v4hcv" podUID="21991fd6-b7f4-48cc-b372-5e43be416857" Jan 28 15:19:19 crc kubenswrapper[4981]: E0128 15:19:19.177755 4981 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:e02722d7581bfe1c5fc13e2fa6811d8665102ba86635c77547abf6b933cde127\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-f8ckc" podUID="18a8ea11-fca0-4503-a458-90ae9e542401" Jan 28 15:19:19 crc kubenswrapper[4981]: E0128 15:19:19.177501 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pqffr" podUID="47340953-e89f-4a20-bbd6-0e25c39b810a" Jan 28 15:19:19 crc kubenswrapper[4981]: E0128 15:19:19.177896 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6qxz5" podUID="7338b601-fe21-458b-97b8-99977fcdb582" Jan 28 15:19:19 crc kubenswrapper[4981]: E0128 15:19:19.177986 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pj2hb" podUID="b4355527-cc7c-436f-a9b0-69f4860f0e36" Jan 28 15:19:19 crc kubenswrapper[4981]: I0128 15:19:19.788551 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert\") pod \"infra-operator-controller-manager-694cf4f878-mk8hk\" (UID: \"28d521cc-409b-485a-809b-98e3e552c042\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk" Jan 28 15:19:19 crc kubenswrapper[4981]: E0128 15:19:19.788735 4981 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:19 crc kubenswrapper[4981]: E0128 15:19:19.788808 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert podName:28d521cc-409b-485a-809b-98e3e552c042 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:23.788786289 +0000 UTC m=+975.240944530 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert") pod "infra-operator-controller-manager-694cf4f878-mk8hk" (UID: "28d521cc-409b-485a-809b-98e3e552c042") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:20 crc kubenswrapper[4981]: I0128 15:19:20.093119 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/655712aa-6ff8-4f99-ac13-85a3def79e97-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qllt9\" (UID: \"655712aa-6ff8-4f99-ac13-85a3def79e97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9" Jan 28 15:19:20 crc kubenswrapper[4981]: E0128 15:19:20.093342 4981 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:20 crc kubenswrapper[4981]: E0128 15:19:20.093581 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655712aa-6ff8-4f99-ac13-85a3def79e97-cert podName:655712aa-6ff8-4f99-ac13-85a3def79e97 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:24.093564094 +0000 UTC m=+975.545722335 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/655712aa-6ff8-4f99-ac13-85a3def79e97-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qllt9" (UID: "655712aa-6ff8-4f99-ac13-85a3def79e97") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:20 crc kubenswrapper[4981]: I0128 15:19:20.701243 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t" Jan 28 15:19:20 crc kubenswrapper[4981]: I0128 15:19:20.701308 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-metrics-certs\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t" Jan 28 15:19:20 crc kubenswrapper[4981]: E0128 15:19:20.701498 4981 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:19:20 crc kubenswrapper[4981]: E0128 15:19:20.701597 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-metrics-certs podName:b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:24.701576473 +0000 UTC m=+976.153734714 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-metrics-certs") pod "openstack-operator-controller-manager-fcdbf6b45-9f88t" (UID: "b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74") : secret "metrics-server-cert" not found Jan 28 15:19:20 crc kubenswrapper[4981]: E0128 15:19:20.703074 4981 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:19:20 crc kubenswrapper[4981]: E0128 15:19:20.703117 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs podName:b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:24.703105253 +0000 UTC m=+976.155263494 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs") pod "openstack-operator-controller-manager-fcdbf6b45-9f88t" (UID: "b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74") : secret "webhook-server-cert" not found Jan 28 15:19:23 crc kubenswrapper[4981]: I0128 15:19:23.852514 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert\") pod \"infra-operator-controller-manager-694cf4f878-mk8hk\" (UID: \"28d521cc-409b-485a-809b-98e3e552c042\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk" Jan 28 15:19:23 crc kubenswrapper[4981]: E0128 15:19:23.853362 4981 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:23 crc kubenswrapper[4981]: E0128 15:19:23.853422 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert podName:28d521cc-409b-485a-809b-98e3e552c042 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:31.853404106 +0000 UTC m=+983.305562347 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert") pod "infra-operator-controller-manager-694cf4f878-mk8hk" (UID: "28d521cc-409b-485a-809b-98e3e552c042") : secret "infra-operator-webhook-server-cert" not found Jan 28 15:19:24 crc kubenswrapper[4981]: I0128 15:19:24.156860 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/655712aa-6ff8-4f99-ac13-85a3def79e97-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qllt9\" (UID: \"655712aa-6ff8-4f99-ac13-85a3def79e97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9" Jan 28 15:19:24 crc kubenswrapper[4981]: E0128 15:19:24.156988 4981 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:24 crc kubenswrapper[4981]: E0128 15:19:24.157444 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/655712aa-6ff8-4f99-ac13-85a3def79e97-cert podName:655712aa-6ff8-4f99-ac13-85a3def79e97 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:32.157401745 +0000 UTC m=+983.609560026 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/655712aa-6ff8-4f99-ac13-85a3def79e97-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854qllt9" (UID: "655712aa-6ff8-4f99-ac13-85a3def79e97") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 15:19:24 crc kubenswrapper[4981]: I0128 15:19:24.764101 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t" Jan 28 15:19:24 crc kubenswrapper[4981]: I0128 15:19:24.764147 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-metrics-certs\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t" Jan 28 15:19:24 crc kubenswrapper[4981]: E0128 15:19:24.764282 4981 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 15:19:24 crc kubenswrapper[4981]: E0128 15:19:24.764316 4981 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 15:19:24 crc kubenswrapper[4981]: E0128 15:19:24.764373 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs podName:b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:32.764349507 +0000 UTC m=+984.216507758 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs") pod "openstack-operator-controller-manager-fcdbf6b45-9f88t" (UID: "b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74") : secret "webhook-server-cert" not found Jan 28 15:19:24 crc kubenswrapper[4981]: E0128 15:19:24.764395 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-metrics-certs podName:b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:32.764386058 +0000 UTC m=+984.216544309 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-metrics-certs") pod "openstack-operator-controller-manager-fcdbf6b45-9f88t" (UID: "b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74") : secret "metrics-server-cert" not found Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.207721 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-c42cn" event={"ID":"94e20f49-bc4f-4bbb-9c67-7f3dc5b925b5","Type":"ContainerStarted","Data":"35269c96476a2aec6852cd48832d1a55fa011408863b822036aa22bfb6910a72"} Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.209112 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-c42cn" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.211840 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8rf4r" event={"ID":"34ddd48f-2151-4df8-af17-70b926965a9e","Type":"ContainerStarted","Data":"06822b1f7c410e23490778b36542e6f06e88b004d70f4c06e4464d5019309956"} Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.212425 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8rf4r" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.216390 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ddkfp" event={"ID":"e2393e50-201d-45e8-96c8-f2bfba6fed7c","Type":"ContainerStarted","Data":"839e3c23253c94b7740b29f7c53a328b95975c9b935892ba109c44f930353d8f"} Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.216543 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ddkfp" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.218296 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4nfrz" event={"ID":"26db2e76-9e22-4b02-8c7f-6ae79127ae41","Type":"ContainerStarted","Data":"4f1825a3ae535c9ddc13752f135632c302dcad9ef5ac384341c80b825b7bc633"} Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.218688 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4nfrz" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.220761 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmd8n" event={"ID":"0a5805bf-96b2-4893-8811-603eacec1cba","Type":"ContainerStarted","Data":"cd8f411b0520891f19aa67f0f5962644ea8e832d21079be8a71c51953497b9b1"} Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.221228 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmd8n" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.225436 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6gnfx" event={"ID":"6c02a009-565d-4217-9d71-ca0505f90cb0","Type":"ContainerStarted","Data":"d0dfd04ff35eb45e04c60b84f2f4c8d2b01b45d3531a1647f0b76b49dd8acf52"} Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.225504 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6gnfx" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.227777 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-wq22r" event={"ID":"e2b38a13-7a0c-4836-9abc-be0e65837eb9","Type":"ContainerStarted","Data":"f6daecb4750267ced48d392caa62f9f61569397811aef6860033867ba352000c"} Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.228303 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-wq22r" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.229903 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-gmjsg" event={"ID":"5d37dcb3-e31a-40e3-ba16-803490369e86","Type":"ContainerStarted","Data":"410634850c1cd62ea414f1df02ac905a023719671f3bf04a9813b10dbb7245d9"} Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.230399 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-gmjsg" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.233647 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-7tgrh" event={"ID":"305a2f40-90a1-4e46-83a6-0ae818e35157","Type":"ContainerStarted","Data":"4061d59e7a8608ab2a6768df60ffb19e03b5a2aac60b6252ceaf3ae0267b8ce9"} Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.234283 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-7tgrh" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.235620 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqpdl" event={"ID":"9ea05521-0dfa-4175-b394-1b5e55fc4c7f","Type":"ContainerStarted","Data":"90f0da9b5b948fef662e60af901514b6af1a7925afc9ac6993cc10abb97b25de"} Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.235988 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqpdl" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.243490 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-w8d2t" event={"ID":"9f95744b-30e0-4d4f-9911-12ca57813aff","Type":"ContainerStarted","Data":"37a68d5e18399ad0eef214af82097d2e3b00d70675b623a3ce83a930010afa5f"} Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.244116 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-w8d2t" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.245579 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wjvvk" event={"ID":"05d30f3a-fdc7-4b65-a93b-747718217906","Type":"ContainerStarted","Data":"5a188f60c16aa93730f2ba008dfe90138d20f4affefb684f5f5db851c335d49c"} Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.246081 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wjvvk" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.279263 4981 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-c42cn" podStartSLOduration=3.111799473 podStartE2EDuration="14.279248346s" podCreationTimestamp="2026-01-28 15:19:15 +0000 UTC" firstStartedPulling="2026-01-28 15:19:17.354138899 +0000 UTC m=+968.806297140" lastFinishedPulling="2026-01-28 15:19:28.521587742 +0000 UTC m=+979.973746013" observedRunningTime="2026-01-28 15:19:29.255596237 +0000 UTC m=+980.707754478" watchObservedRunningTime="2026-01-28 15:19:29.279248346 +0000 UTC m=+980.731406587" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.282037 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6gnfx" podStartSLOduration=2.498173166 podStartE2EDuration="13.282022758s" podCreationTimestamp="2026-01-28 15:19:16 +0000 UTC" firstStartedPulling="2026-01-28 15:19:17.780723034 +0000 UTC m=+969.232881265" lastFinishedPulling="2026-01-28 15:19:28.564572616 +0000 UTC m=+980.016730857" observedRunningTime="2026-01-28 15:19:29.277949642 +0000 UTC m=+980.730107883" watchObservedRunningTime="2026-01-28 15:19:29.282022758 +0000 UTC m=+980.734180999" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.320300 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmd8n" podStartSLOduration=3.543858311 podStartE2EDuration="14.320274629s" podCreationTimestamp="2026-01-28 15:19:15 +0000 UTC" firstStartedPulling="2026-01-28 15:19:17.775053475 +0000 UTC m=+969.227211716" lastFinishedPulling="2026-01-28 15:19:28.551469793 +0000 UTC m=+980.003628034" observedRunningTime="2026-01-28 15:19:29.311463638 +0000 UTC m=+980.763621879" watchObservedRunningTime="2026-01-28 15:19:29.320274629 +0000 UTC m=+980.772432870" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.343692 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wjvvk" podStartSLOduration=2.572599492 podStartE2EDuration="13.34367143s" podCreationTimestamp="2026-01-28 15:19:16 +0000 UTC" firstStartedPulling="2026-01-28 15:19:17.780396695 +0000 UTC m=+969.232554936" lastFinishedPulling="2026-01-28 15:19:28.551468633 +0000 UTC m=+980.003626874" observedRunningTime="2026-01-28 15:19:29.34023556 +0000 UTC m=+980.792393801" watchObservedRunningTime="2026-01-28 15:19:29.34367143 +0000 UTC m=+980.795829671" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.360537 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-7tgrh" podStartSLOduration=2.568422314 podStartE2EDuration="13.360516141s" podCreationTimestamp="2026-01-28 15:19:16 +0000 UTC" firstStartedPulling="2026-01-28 15:19:17.784784431 +0000 UTC m=+969.236942672" lastFinishedPulling="2026-01-28 15:19:28.576878258 +0000 UTC m=+980.029036499" observedRunningTime="2026-01-28 15:19:29.35588773 +0000 UTC m=+980.808045981" watchObservedRunningTime="2026-01-28 15:19:29.360516141 +0000 UTC m=+980.812674382" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.391925 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-gmjsg" podStartSLOduration=3.100823992 podStartE2EDuration="14.391904082s" podCreationTimestamp="2026-01-28 15:19:15 +0000 UTC" firstStartedPulling="2026-01-28 
15:19:17.320922946 +0000 UTC m=+968.773081187" lastFinishedPulling="2026-01-28 15:19:28.612003026 +0000 UTC m=+980.064161277" observedRunningTime="2026-01-28 15:19:29.387789114 +0000 UTC m=+980.839947355" watchObservedRunningTime="2026-01-28 15:19:29.391904082 +0000 UTC m=+980.844062323" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.417035 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-wq22r" podStartSLOduration=3.261284381 podStartE2EDuration="14.417012658s" podCreationTimestamp="2026-01-28 15:19:15 +0000 UTC" firstStartedPulling="2026-01-28 15:19:17.354145669 +0000 UTC m=+968.806303920" lastFinishedPulling="2026-01-28 15:19:28.509873946 +0000 UTC m=+979.962032197" observedRunningTime="2026-01-28 15:19:29.415467718 +0000 UTC m=+980.867625959" watchObservedRunningTime="2026-01-28 15:19:29.417012658 +0000 UTC m=+980.869170909" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.442524 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ddkfp" podStartSLOduration=3.272550309 podStartE2EDuration="14.442509715s" podCreationTimestamp="2026-01-28 15:19:15 +0000 UTC" firstStartedPulling="2026-01-28 15:19:17.370044537 +0000 UTC m=+968.822202778" lastFinishedPulling="2026-01-28 15:19:28.540003943 +0000 UTC m=+979.992162184" observedRunningTime="2026-01-28 15:19:29.441497449 +0000 UTC m=+980.893655700" watchObservedRunningTime="2026-01-28 15:19:29.442509715 +0000 UTC m=+980.894667956" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.481335 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqpdl" podStartSLOduration=3.189636438 podStartE2EDuration="14.48131828s" podCreationTimestamp="2026-01-28 15:19:15 +0000 UTC" firstStartedPulling="2026-01-28 15:19:17.261286051 +0000 UTC m=+968.713444292" lastFinishedPulling="2026-01-28 15:19:28.552967843 +0000 UTC m=+980.005126134" observedRunningTime="2026-01-28 15:19:29.476504704 +0000 UTC m=+980.928662945" watchObservedRunningTime="2026-01-28 15:19:29.48131828 +0000 UTC m=+980.933476521" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.515322 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-w8d2t" podStartSLOduration=2.704016817 podStartE2EDuration="13.515301529s" podCreationTimestamp="2026-01-28 15:19:16 +0000 UTC" firstStartedPulling="2026-01-28 15:19:17.756165539 +0000 UTC m=+969.208323790" lastFinishedPulling="2026-01-28 15:19:28.567450261 +0000 UTC m=+980.019608502" observedRunningTime="2026-01-28 15:19:29.496572429 +0000 UTC m=+980.948730670" watchObservedRunningTime="2026-01-28 15:19:29.515301529 +0000 UTC m=+980.967459770" Jan 28 15:19:29 crc kubenswrapper[4981]: I0128 15:19:29.585759 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4nfrz" podStartSLOduration=3.397138011 podStartE2EDuration="14.585740641s" podCreationTimestamp="2026-01-28 15:19:15 +0000 UTC" firstStartedPulling="2026-01-28 15:19:17.321264145 +0000 UTC m=+968.773422386" lastFinishedPulling="2026-01-28 15:19:28.509866775 +0000 UTC m=+979.962025016" observedRunningTime="2026-01-28 15:19:29.53331985 +0000 UTC m=+980.985478091" watchObservedRunningTime="2026-01-28 15:19:29.585740641 +0000 UTC 
m=+981.037898882"
Jan 28 15:19:31 crc kubenswrapper[4981]: I0128 15:19:31.884915 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert\") pod \"infra-operator-controller-manager-694cf4f878-mk8hk\" (UID: \"28d521cc-409b-485a-809b-98e3e552c042\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk"
Jan 28 15:19:31 crc kubenswrapper[4981]: E0128 15:19:31.885072 4981 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 28 15:19:31 crc kubenswrapper[4981]: E0128 15:19:31.885444 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert podName:28d521cc-409b-485a-809b-98e3e552c042 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:47.885423279 +0000 UTC m=+999.337581520 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert") pod "infra-operator-controller-manager-694cf4f878-mk8hk" (UID: "28d521cc-409b-485a-809b-98e3e552c042") : secret "infra-operator-webhook-server-cert" not found
Jan 28 15:19:32 crc kubenswrapper[4981]: I0128 15:19:32.192888 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/655712aa-6ff8-4f99-ac13-85a3def79e97-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qllt9\" (UID: \"655712aa-6ff8-4f99-ac13-85a3def79e97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9"
Jan 28 15:19:32 crc kubenswrapper[4981]: I0128 15:19:32.200514 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/655712aa-6ff8-4f99-ac13-85a3def79e97-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854qllt9\" (UID: \"655712aa-6ff8-4f99-ac13-85a3def79e97\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9"
Jan 28 15:19:32 crc kubenswrapper[4981]: I0128 15:19:32.420679 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9"
Jan 28 15:19:32 crc kubenswrapper[4981]: I0128 15:19:32.675093 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8rf4r" podStartSLOduration=5.821615838 podStartE2EDuration="16.675074499s" podCreationTimestamp="2026-01-28 15:19:16 +0000 UTC" firstStartedPulling="2026-01-28 15:19:17.707357767 +0000 UTC m=+969.159516018" lastFinishedPulling="2026-01-28 15:19:28.560816428 +0000 UTC m=+980.012974679" observedRunningTime="2026-01-28 15:19:29.588429481 +0000 UTC m=+981.040587722" watchObservedRunningTime="2026-01-28 15:19:32.675074499 +0000 UTC m=+984.127232740"
Jan 28 15:19:32 crc kubenswrapper[4981]: I0128 15:19:32.684791 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9"]
Jan 28 15:19:32 crc kubenswrapper[4981]: I0128 15:19:32.801521 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t"
Jan 28 15:19:32 crc kubenswrapper[4981]: I0128 15:19:32.801591 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-metrics-certs\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t"
Jan 28 15:19:32 crc kubenswrapper[4981]: E0128 15:19:32.801729 4981 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Jan 28 15:19:32 crc kubenswrapper[4981]: E0128 15:19:32.801919 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs podName:b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74 nodeName:}" failed. No retries permitted until 2026-01-28 15:19:48.801892876 +0000 UTC m=+1000.254051127 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs") pod "openstack-operator-controller-manager-fcdbf6b45-9f88t" (UID: "b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74") : secret "webhook-server-cert" not found
Jan 28 15:19:32 crc kubenswrapper[4981]: I0128 15:19:32.807364 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-metrics-certs\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t"
Jan 28 15:19:33 crc kubenswrapper[4981]: I0128 15:19:33.267542 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9" event={"ID":"655712aa-6ff8-4f99-ac13-85a3def79e97","Type":"ContainerStarted","Data":"e9bf07caae13877396d61c01553ce910f1d3782049a3abfa94949b560fd188e9"}
Jan 28 15:19:36 crc kubenswrapper[4981]: I0128 15:19:36.205421 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-gmjsg"
Jan 28 15:19:36 crc kubenswrapper[4981]: I0128 15:19:36.227856 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-ddkfp"
Jan 28 15:19:36 crc kubenswrapper[4981]: I0128 15:19:36.256705 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-gmd8n"
Jan 28 15:19:36 crc kubenswrapper[4981]: I0128 15:19:36.283380 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-c42cn"
Jan 28 15:19:36 crc kubenswrapper[4981]: I0128 15:19:36.299755 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-mqpdl"
Jan 28 15:19:36 crc kubenswrapper[4981]: I0128 15:19:36.371172 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-4nfrz"
Jan 28 15:19:36 crc kubenswrapper[4981]: I0128 15:19:36.523419 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-wq22r"
Jan 28 15:19:36 crc kubenswrapper[4981]: I0128 15:19:36.629675 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-6gnfx"
Jan 28 15:19:36 crc kubenswrapper[4981]: I0128 15:19:36.695816 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-w8d2t"
Jan 28 15:19:36 crc kubenswrapper[4981]: I0128 15:19:36.723023 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-wjvvk"
Jan 28 15:19:36 crc kubenswrapper[4981]: I0128 15:19:36.762677 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-7tgrh"
Jan 28 15:19:36 crc kubenswrapper[4981]: I0128 15:19:36.863719 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-8rf4r"
Jan 28 15:19:44 crc kubenswrapper[4981]: I0128 15:19:44.381310 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjp7n" event={"ID":"ad2c98b1-4994-4602-af9f-6dce33122651","Type":"ContainerStarted","Data":"16a071baa71300001a1683ac791dee235f3bd1dfab54a2ec5b799d5c7670cec2"}
Jan 28 15:19:44 crc kubenswrapper[4981]: I0128 15:19:44.391707 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pj2hb" event={"ID":"b4355527-cc7c-436f-a9b0-69f4860f0e36","Type":"ContainerStarted","Data":"69270a81a0187fffaa277371e45584c5e8b889115050787b8aef514af38353bc"}
Jan 28 15:19:44 crc kubenswrapper[4981]: I0128 15:19:44.392517 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pj2hb"
Jan 28 15:19:44 crc kubenswrapper[4981]: I0128 15:19:44.403147 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-xjp7n" podStartSLOduration=2.173175775 podStartE2EDuration="28.403125529s" podCreationTimestamp="2026-01-28 15:19:16 +0000 UTC" firstStartedPulling="2026-01-28 15:19:17.849438709 +0000 UTC m=+969.301596950" lastFinishedPulling="2026-01-28 15:19:44.079388453 +0000 UTC m=+995.531546704" observedRunningTime="2026-01-28 15:19:44.397280506 +0000 UTC m=+995.849438757" watchObservedRunningTime="2026-01-28 15:19:44.403125529 +0000 UTC m=+995.855283770"
Jan 28 15:19:44 crc kubenswrapper[4981]: I0128 15:19:44.422363 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pj2hb" podStartSLOduration=2.247767596 podStartE2EDuration="28.422345752s" podCreationTimestamp="2026-01-28 15:19:16 +0000 UTC" firstStartedPulling="2026-01-28 15:19:17.841238824 +0000 UTC m=+969.293397065" lastFinishedPulling="2026-01-28 15:19:44.01581699 +0000 UTC m=+995.467975221" observedRunningTime="2026-01-28 15:19:44.418607614 +0000 UTC m=+995.870765865" watchObservedRunningTime="2026-01-28 15:19:44.422345752 +0000 UTC m=+995.874503993"
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.410665 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-v4hcv" event={"ID":"21991fd6-b7f4-48cc-b372-5e43be416857","Type":"ContainerStarted","Data":"7132fc28d844fe73f97a54d6cb37af609338314c070bf70148bd2784c183d939"}
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.412660 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-v4hcv"
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.414734 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9" event={"ID":"655712aa-6ff8-4f99-ac13-85a3def79e97","Type":"ContainerStarted","Data":"8ec8c964531cb7479ba30215ece3a0e6e4a0101895742f712049de0ec6584a9e"}
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.414892 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9"
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.415973 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-f8ckc" event={"ID":"18a8ea11-fca0-4503-a458-90ae9e542401","Type":"ContainerStarted","Data":"a70d996766ccad629cbd1bf1d3f2d09a5ffe79c99c9c08b9d52636f316d537e2"}
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.416145 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-f8ckc"
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.417210 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pqffr" event={"ID":"47340953-e89f-4a20-bbd6-0e25c39b810a","Type":"ContainerStarted","Data":"3cd262f16a71e6fc06fd58780e45f66f8b47b93b0478a4f10e9f04789bf948a8"}
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.417539 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pqffr"
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.418472 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6qxz5" event={"ID":"7338b601-fe21-458b-97b8-99977fcdb582","Type":"ContainerStarted","Data":"b6cc85d78a3721e17ff3ca0f446923364adb443b9d16b4c04346476dae16a0da"}
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.418695 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6qxz5"
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.419529 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kd8bc" event={"ID":"462b383e-f994-4f35-a29c-6be57d7fd20c","Type":"ContainerStarted","Data":"ae2cc89bfa80cdea0fbeaafc90e6c94f76e91fe5883ca37cca578e39f5e11d33"}
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.419858 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kd8bc"
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.421565 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-65v5g" event={"ID":"82289c62-674e-483e-ac47-f09b000a0c90","Type":"ContainerStarted","Data":"936026de90bd8f88edb5b95aeff7c737451b8ed8390c5eab650449f5fe6162d7"}
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.422133 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-65v5g"
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.429633 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-v4hcv" podStartSLOduration=3.797871241 podStartE2EDuration="29.429621733s" podCreationTimestamp="2026-01-28 15:19:16 +0000 UTC" firstStartedPulling="2026-01-28 15:19:17.835899483 +0000 UTC m=+969.288057724" lastFinishedPulling="2026-01-28 15:19:43.467649975 +0000 UTC m=+994.919808216" observedRunningTime="2026-01-28 15:19:45.426361188 +0000 UTC m=+996.878519429" watchObservedRunningTime="2026-01-28 15:19:45.429621733 +0000 UTC m=+996.881779974"
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.441767 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-f8ckc" podStartSLOduration=3.323277432 podStartE2EDuration="29.44175695s" podCreationTimestamp="2026-01-28 15:19:16 +0000 UTC" firstStartedPulling="2026-01-28 15:19:17.854501792 +0000 UTC m=+969.306660033" lastFinishedPulling="2026-01-28 15:19:43.9729813 +0000 UTC m=+995.425139551" observedRunningTime="2026-01-28 15:19:45.440897248 +0000 UTC m=+996.893055489" watchObservedRunningTime="2026-01-28 15:19:45.44175695 +0000 UTC m=+996.893915191"
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.463495 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6qxz5" podStartSLOduration=4.24486956 podStartE2EDuration="29.463472058s" podCreationTimestamp="2026-01-28 15:19:16 +0000 UTC" firstStartedPulling="2026-01-28 15:19:17.835977415 +0000 UTC m=+969.288135656" lastFinishedPulling="2026-01-28 15:19:43.054579913 +0000 UTC m=+994.506738154" observedRunningTime="2026-01-28 15:19:45.458518729 +0000 UTC m=+996.910676960" watchObservedRunningTime="2026-01-28 15:19:45.463472058 +0000 UTC m=+996.915630309"
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.478329 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-65v5g" podStartSLOduration=3.311148366 podStartE2EDuration="29.478303046s" podCreationTimestamp="2026-01-28 15:19:16 +0000 UTC" firstStartedPulling="2026-01-28 15:19:17.856238588 +0000 UTC m=+969.308396839" lastFinishedPulling="2026-01-28 15:19:44.023393278 +0000 UTC m=+995.475551519" observedRunningTime="2026-01-28 15:19:45.473932632 +0000 UTC m=+996.926090873" watchObservedRunningTime="2026-01-28 15:19:45.478303046 +0000 UTC m=+996.930461297"
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.524307 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9" podStartSLOduration=18.237467718 podStartE2EDuration="29.524287479s" podCreationTimestamp="2026-01-28 15:19:16 +0000 UTC" firstStartedPulling="2026-01-28 15:19:32.686032796 +0000 UTC m=+984.138191037" lastFinishedPulling="2026-01-28 15:19:43.972852547 +0000 UTC m=+995.425010798" observedRunningTime="2026-01-28 15:19:45.521871275 +0000 UTC m=+996.974029516" watchObservedRunningTime="2026-01-28 15:19:45.524287479 +0000 UTC m=+996.976445720"
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.527308 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kd8bc" podStartSLOduration=3.360829034 podStartE2EDuration="29.527296107s" podCreationTimestamp="2026-01-28 15:19:16 +0000 UTC" firstStartedPulling="2026-01-28 15:19:17.852395197 +0000 UTC m=+969.304553438" lastFinishedPulling="2026-01-28 15:19:44.01886226 +0000 UTC m=+995.471020511" observedRunningTime="2026-01-28 15:19:45.494925451 +0000 UTC m=+996.947083692" watchObservedRunningTime="2026-01-28 15:19:45.527296107 +0000 UTC m=+996.979454348"
Jan 28 15:19:45 crc kubenswrapper[4981]: I0128 15:19:45.538220 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pqffr" podStartSLOduration=4.280257975 podStartE2EDuration="30.538202513s" podCreationTimestamp="2026-01-28 15:19:15 +0000 UTC" firstStartedPulling="2026-01-28 15:19:17.832626127 +0000 UTC m=+969.284784368" lastFinishedPulling="2026-01-28 15:19:44.090570675 +0000 UTC m=+995.542728906" observedRunningTime="2026-01-28 15:19:45.537897915 +0000 UTC m=+996.990056156" watchObservedRunningTime="2026-01-28 15:19:45.538202513 +0000 UTC m=+996.990360754"
Jan 28 15:19:47 crc kubenswrapper[4981]: I0128 15:19:47.954905 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert\") pod \"infra-operator-controller-manager-694cf4f878-mk8hk\" (UID: \"28d521cc-409b-485a-809b-98e3e552c042\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk"
Jan 28 15:19:47 crc kubenswrapper[4981]: I0128 15:19:47.959886 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28d521cc-409b-485a-809b-98e3e552c042-cert\") pod \"infra-operator-controller-manager-694cf4f878-mk8hk\" (UID: \"28d521cc-409b-485a-809b-98e3e552c042\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk"
Jan 28 15:19:48 crc kubenswrapper[4981]: I0128 15:19:48.196440 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk"
Jan 28 15:19:48 crc kubenswrapper[4981]: I0128 15:19:48.678686 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk"]
Jan 28 15:19:48 crc kubenswrapper[4981]: I0128 15:19:48.870554 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t"
Jan 28 15:19:48 crc kubenswrapper[4981]: I0128 15:19:48.877716 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74-webhook-certs\") pod \"openstack-operator-controller-manager-fcdbf6b45-9f88t\" (UID: \"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74\") " pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t"
Jan 28 15:19:49 crc kubenswrapper[4981]: I0128 15:19:49.004218 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t"
Jan 28 15:19:49 crc kubenswrapper[4981]: I0128 15:19:49.451821 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk" event={"ID":"28d521cc-409b-485a-809b-98e3e552c042","Type":"ContainerStarted","Data":"cd52b0c3fff4f075b58eea96ebf7e3fdcdff34c78a92f9e13d72a3bb33210c7f"}
Jan 28 15:19:49 crc kubenswrapper[4981]: I0128 15:19:49.890010 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t"]
Jan 28 15:19:49 crc kubenswrapper[4981]: I0128 15:19:49.897082 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:19:49 crc kubenswrapper[4981]: I0128 15:19:49.897131 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:19:50 crc kubenswrapper[4981]: I0128 15:19:50.459556 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t" event={"ID":"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74","Type":"ContainerStarted","Data":"d389a9b36ef3d32566be35fae6fde7123fc653fd34851f02693741d4b71a5751"}
Jan 28 15:19:50 crc kubenswrapper[4981]: I0128 15:19:50.459611 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t" event={"ID":"b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74","Type":"ContainerStarted","Data":"688793671a9a5d51e3a117c6ea6103e8d67c36d3b1b7cb19519812040cc6304b"}
Jan 28 15:19:50 crc kubenswrapper[4981]: I0128 15:19:50.459752 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t"
Jan 28 15:19:51 crc kubenswrapper[4981]: I0128 15:19:51.470016 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk" event={"ID":"28d521cc-409b-485a-809b-98e3e552c042","Type":"ContainerStarted","Data":"fb19bd5e7e7c6c2ec83a6338b9071b6efdd718c801013450905085b3b4f5ca45"}
Jan 28 15:19:51 crc kubenswrapper[4981]: I0128 15:19:51.493629 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk" podStartSLOduration=34.20413659 podStartE2EDuration="36.493610332s" podCreationTimestamp="2026-01-28 15:19:15 +0000 UTC" firstStartedPulling="2026-01-28 15:19:48.678239367 +0000 UTC m=+1000.130397658" lastFinishedPulling="2026-01-28 15:19:50.967713159 +0000 UTC m=+1002.419871400" observedRunningTime="2026-01-28 15:19:51.48934869 +0000 UTC m=+1002.941506951" watchObservedRunningTime="2026-01-28 15:19:51.493610332 +0000 UTC m=+1002.945768593"
Jan 28 15:19:51 crc kubenswrapper[4981]: I0128 15:19:51.497645 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t" podStartSLOduration=35.497625497 podStartE2EDuration="35.497625497s" podCreationTimestamp="2026-01-28 15:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:19:50.497066821 +0000 UTC m=+1001.949225062" watchObservedRunningTime="2026-01-28 15:19:51.497625497 +0000 UTC m=+1002.949783748"
Jan 28 15:19:52 crc kubenswrapper[4981]: I0128 15:19:52.430602 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854qllt9"
Jan 28 15:19:52 crc kubenswrapper[4981]: I0128 15:19:52.480520 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk"
Jan 28 15:19:56 crc kubenswrapper[4981]: I0128 15:19:56.612622 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-pqffr"
Jan 28 15:19:56 crc kubenswrapper[4981]: I0128 15:19:56.734789 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-kd8bc"
Jan 28 15:19:56 crc kubenswrapper[4981]: I0128 15:19:56.879538 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-v4hcv"
Jan 28 15:19:56 crc kubenswrapper[4981]: I0128 15:19:56.948308 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-6qxz5"
Jan 28 15:19:57 crc kubenswrapper[4981]: I0128 15:19:57.077800 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-f8ckc"
Jan 28 15:19:57 crc kubenswrapper[4981]: I0128 15:19:57.114719 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-pj2hb"
Jan 28 15:19:57 crc kubenswrapper[4981]: I0128 15:19:57.195952 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-65v5g"
Jan 28 15:19:58 crc kubenswrapper[4981]: I0128 15:19:58.209312 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-mk8hk"
Jan 28 15:19:59 crc kubenswrapper[4981]: I0128 15:19:59.011857 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-fcdbf6b45-9f88t"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.351241 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f4ss6"]
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.353658 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f4ss6"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.356070 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.356492 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-28dw6"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.356598 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.356709 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.357951 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f4ss6"]
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.417460 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x4fdv"]
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.420018 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x4fdv"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.428098 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x4fdv"]
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.428105 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.486283 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98ttc\" (UniqueName: \"kubernetes.io/projected/ffafe79d-4b61-4389-8e01-c1b38adf311e-kube-api-access-98ttc\") pod \"dnsmasq-dns-675f4bcbfc-f4ss6\" (UID: \"ffafe79d-4b61-4389-8e01-c1b38adf311e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f4ss6"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.486610 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffafe79d-4b61-4389-8e01-c1b38adf311e-config\") pod \"dnsmasq-dns-675f4bcbfc-f4ss6\" (UID: \"ffafe79d-4b61-4389-8e01-c1b38adf311e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f4ss6"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.587478 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ws74\" (UniqueName: \"kubernetes.io/projected/8fef5781-962f-4e98-866b-37c6e503259b-kube-api-access-7ws74\") pod \"dnsmasq-dns-78dd6ddcc-x4fdv\" (UID: \"8fef5781-962f-4e98-866b-37c6e503259b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x4fdv"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.587531 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffafe79d-4b61-4389-8e01-c1b38adf311e-config\") pod \"dnsmasq-dns-675f4bcbfc-f4ss6\" (UID: \"ffafe79d-4b61-4389-8e01-c1b38adf311e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f4ss6"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.587557 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fef5781-962f-4e98-866b-37c6e503259b-config\") pod \"dnsmasq-dns-78dd6ddcc-x4fdv\" (UID: \"8fef5781-962f-4e98-866b-37c6e503259b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x4fdv"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.587695 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98ttc\" (UniqueName: \"kubernetes.io/projected/ffafe79d-4b61-4389-8e01-c1b38adf311e-kube-api-access-98ttc\") pod \"dnsmasq-dns-675f4bcbfc-f4ss6\" (UID: \"ffafe79d-4b61-4389-8e01-c1b38adf311e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f4ss6"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.587772 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fef5781-962f-4e98-866b-37c6e503259b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-x4fdv\" (UID: \"8fef5781-962f-4e98-866b-37c6e503259b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x4fdv"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.588468 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffafe79d-4b61-4389-8e01-c1b38adf311e-config\") pod \"dnsmasq-dns-675f4bcbfc-f4ss6\" (UID: \"ffafe79d-4b61-4389-8e01-c1b38adf311e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f4ss6"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.610940 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98ttc\" (UniqueName: \"kubernetes.io/projected/ffafe79d-4b61-4389-8e01-c1b38adf311e-kube-api-access-98ttc\") pod \"dnsmasq-dns-675f4bcbfc-f4ss6\" (UID: \"ffafe79d-4b61-4389-8e01-c1b38adf311e\") " pod="openstack/dnsmasq-dns-675f4bcbfc-f4ss6"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.680920 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f4ss6"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.688571 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ws74\" (UniqueName: \"kubernetes.io/projected/8fef5781-962f-4e98-866b-37c6e503259b-kube-api-access-7ws74\") pod \"dnsmasq-dns-78dd6ddcc-x4fdv\" (UID: \"8fef5781-962f-4e98-866b-37c6e503259b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x4fdv"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.688717 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fef5781-962f-4e98-866b-37c6e503259b-config\") pod \"dnsmasq-dns-78dd6ddcc-x4fdv\" (UID: \"8fef5781-962f-4e98-866b-37c6e503259b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x4fdv"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.689645 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fef5781-962f-4e98-866b-37c6e503259b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-x4fdv\" (UID: \"8fef5781-962f-4e98-866b-37c6e503259b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x4fdv"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.689578 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fef5781-962f-4e98-866b-37c6e503259b-config\") pod \"dnsmasq-dns-78dd6ddcc-x4fdv\" (UID: \"8fef5781-962f-4e98-866b-37c6e503259b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x4fdv"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.690456 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fef5781-962f-4e98-866b-37c6e503259b-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-x4fdv\" (UID: \"8fef5781-962f-4e98-866b-37c6e503259b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x4fdv"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.709445 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ws74\" (UniqueName: \"kubernetes.io/projected/8fef5781-962f-4e98-866b-37c6e503259b-kube-api-access-7ws74\") pod \"dnsmasq-dns-78dd6ddcc-x4fdv\" (UID: \"8fef5781-962f-4e98-866b-37c6e503259b\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x4fdv"
Jan 28 15:20:16 crc kubenswrapper[4981]: I0128 15:20:16.735649 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x4fdv"
Jan 28 15:20:17 crc kubenswrapper[4981]: I0128 15:20:17.148462 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f4ss6"]
Jan 28 15:20:17 crc kubenswrapper[4981]: I0128 15:20:17.160998 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 15:20:17 crc kubenswrapper[4981]: W0128 15:20:17.205037 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fef5781_962f_4e98_866b_37c6e503259b.slice/crio-6503203be8fd4b9b9d697f88cc24640d9ef4bdc77de12b7f87b4cfde8f1bcaf8 WatchSource:0}: Error finding container 6503203be8fd4b9b9d697f88cc24640d9ef4bdc77de12b7f87b4cfde8f1bcaf8: Status 404 returned error can't find the container with id 6503203be8fd4b9b9d697f88cc24640d9ef4bdc77de12b7f87b4cfde8f1bcaf8
Jan 28 15:20:17 crc kubenswrapper[4981]: I0128 15:20:17.206013 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x4fdv"]
Jan 28 15:20:17 crc kubenswrapper[4981]: I0128 15:20:17.706329 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-f4ss6" event={"ID":"ffafe79d-4b61-4389-8e01-c1b38adf311e","Type":"ContainerStarted","Data":"60b3e45d755fb50f7ccaa7e7d7962149df4ba74f5d44fac0c85a4133d1b93f63"}
Jan 28 15:20:17 crc kubenswrapper[4981]: I0128 15:20:17.707454 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-x4fdv" event={"ID":"8fef5781-962f-4e98-866b-37c6e503259b","Type":"ContainerStarted","Data":"6503203be8fd4b9b9d697f88cc24640d9ef4bdc77de12b7f87b4cfde8f1bcaf8"}
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.134699 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f4ss6"]
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.157680 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bp7bz"]
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.161569 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bp7bz"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.172255 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bp7bz"]
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.234305 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45xlh\" (UniqueName: \"kubernetes.io/projected/4e832385-471e-4cf2-a1f5-ecb0aff7b3f5-kube-api-access-45xlh\") pod \"dnsmasq-dns-666b6646f7-bp7bz\" (UID: \"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5\") " pod="openstack/dnsmasq-dns-666b6646f7-bp7bz"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.234347 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e832385-471e-4cf2-a1f5-ecb0aff7b3f5-config\") pod \"dnsmasq-dns-666b6646f7-bp7bz\" (UID: \"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5\") " pod="openstack/dnsmasq-dns-666b6646f7-bp7bz"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.234384 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e832385-471e-4cf2-a1f5-ecb0aff7b3f5-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bp7bz\" (UID: \"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5\") " pod="openstack/dnsmasq-dns-666b6646f7-bp7bz"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.337082 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45xlh\" (UniqueName: \"kubernetes.io/projected/4e832385-471e-4cf2-a1f5-ecb0aff7b3f5-kube-api-access-45xlh\") pod \"dnsmasq-dns-666b6646f7-bp7bz\" (UID: \"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5\") " pod="openstack/dnsmasq-dns-666b6646f7-bp7bz"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.337144 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e832385-471e-4cf2-a1f5-ecb0aff7b3f5-config\") pod \"dnsmasq-dns-666b6646f7-bp7bz\" (UID: \"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5\") " pod="openstack/dnsmasq-dns-666b6646f7-bp7bz"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.337271 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e832385-471e-4cf2-a1f5-ecb0aff7b3f5-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bp7bz\" (UID: \"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5\") " pod="openstack/dnsmasq-dns-666b6646f7-bp7bz"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.338082 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e832385-471e-4cf2-a1f5-ecb0aff7b3f5-dns-svc\") pod \"dnsmasq-dns-666b6646f7-bp7bz\" (UID: \"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5\") " pod="openstack/dnsmasq-dns-666b6646f7-bp7bz"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.338318 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e832385-471e-4cf2-a1f5-ecb0aff7b3f5-config\") pod \"dnsmasq-dns-666b6646f7-bp7bz\" (UID: \"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5\") " pod="openstack/dnsmasq-dns-666b6646f7-bp7bz"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.384268 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45xlh\" (UniqueName: \"kubernetes.io/projected/4e832385-471e-4cf2-a1f5-ecb0aff7b3f5-kube-api-access-45xlh\") pod \"dnsmasq-dns-666b6646f7-bp7bz\" (UID: \"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5\") " pod="openstack/dnsmasq-dns-666b6646f7-bp7bz"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.403064 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x4fdv"]
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.434505 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fwnkd"]
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.437368 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.446946 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fwnkd"]
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.483753 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bp7bz"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.549026 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c96d9d-6274-4310-a9c3-855a38413dda-config\") pod \"dnsmasq-dns-57d769cc4f-fwnkd\" (UID: \"c8c96d9d-6274-4310-a9c3-855a38413dda\") " pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.549069 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvs97\" (UniqueName: \"kubernetes.io/projected/c8c96d9d-6274-4310-a9c3-855a38413dda-kube-api-access-nvs97\") pod \"dnsmasq-dns-57d769cc4f-fwnkd\" (UID: \"c8c96d9d-6274-4310-a9c3-855a38413dda\") " pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.549104 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8c96d9d-6274-4310-a9c3-855a38413dda-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-fwnkd\" (UID: \"c8c96d9d-6274-4310-a9c3-855a38413dda\") " pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.669532 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8c96d9d-6274-4310-a9c3-855a38413dda-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-fwnkd\" (UID: \"c8c96d9d-6274-4310-a9c3-855a38413dda\") " pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.669650 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c96d9d-6274-4310-a9c3-855a38413dda-config\") pod \"dnsmasq-dns-57d769cc4f-fwnkd\" (UID: \"c8c96d9d-6274-4310-a9c3-855a38413dda\") " pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.669669 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvs97\" (UniqueName: \"kubernetes.io/projected/c8c96d9d-6274-4310-a9c3-855a38413dda-kube-api-access-nvs97\") pod \"dnsmasq-dns-57d769cc4f-fwnkd\" (UID: \"c8c96d9d-6274-4310-a9c3-855a38413dda\") " pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.672628 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8c96d9d-6274-4310-a9c3-855a38413dda-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-fwnkd\" (UID: \"c8c96d9d-6274-4310-a9c3-855a38413dda\") " pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.673524 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c96d9d-6274-4310-a9c3-855a38413dda-config\") pod \"dnsmasq-dns-57d769cc4f-fwnkd\" (UID: \"c8c96d9d-6274-4310-a9c3-855a38413dda\") " pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.703246 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvs97\" (UniqueName: \"kubernetes.io/projected/c8c96d9d-6274-4310-a9c3-855a38413dda-kube-api-access-nvs97\") pod \"dnsmasq-dns-57d769cc4f-fwnkd\" (UID: \"c8c96d9d-6274-4310-a9c3-855a38413dda\") " pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.756699 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd"
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.899969 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:20:19 crc kubenswrapper[4981]: I0128 15:20:19.900021 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.006747 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bp7bz"]
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.294227 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.296234 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.298027 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.302571 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.302817 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.302979 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.303116 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.303393 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.306713 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mlcst"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.310521 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.482420 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6456c27c-6d70-453b-a759-b6411aa67f51-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.482486 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.482513 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.482606 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6456c27c-6d70-453b-a759-b6411aa67f51-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.482647 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.482675 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6456c27c-6d70-453b-a759-b6411aa67f51-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.482691 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6456c27c-6d70-453b-a759-b6411aa67f51-config-data\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.482706 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9n7r\" (UniqueName: \"kubernetes.io/projected/6456c27c-6d70-453b-a759-b6411aa67f51-kube-api-access-k9n7r\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.482727 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6456c27c-6d70-453b-a759-b6411aa67f51-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.482766 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.482784 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.566363 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.569620 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.572723 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.574152 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.578036 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.578314 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.578408 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-w9wlh"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.578494 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.579158 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.579553 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.586042 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6456c27c-6d70-453b-a759-b6411aa67f51-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.586097 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.586130 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.586176 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6456c27c-6d70-453b-a759-b6411aa67f51-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.586233 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.586295 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6456c27c-6d70-453b-a759-b6411aa67f51-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.586327 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6456c27c-6d70-453b-a759-b6411aa67f51-config-data\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.586362 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9n7r\" (UniqueName: \"kubernetes.io/projected/6456c27c-6d70-453b-a759-b6411aa67f51-kube-api-access-k9n7r\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.586390 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6456c27c-6d70-453b-a759-b6411aa67f51-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.586435 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.586461 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.587687 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.589261 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6456c27c-6d70-453b-a759-b6411aa67f51-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.589750 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.590049 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6456c27c-6d70-453b-a759-b6411aa67f51-config-data\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.590751 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.591018 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.592649 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6456c27c-6d70-453b-a759-b6411aa67f51-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.611009 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6456c27c-6d70-453b-a759-b6411aa67f51-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.612934 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6456c27c-6d70-453b-a759-b6411aa67f51-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.613035 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9n7r\" (UniqueName: \"kubernetes.io/projected/6456c27c-6d70-453b-a759-b6411aa67f51-kube-api-access-k9n7r\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.619800 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.639632 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " pod="openstack/rabbitmq-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.689240 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cccad1c-80c8-4806-a093-ecb1ad203f3c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.689511 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5cccad1c-80c8-4806-a093-ecb1ad203f3c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.689583 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5cccad1c-80c8-4806-a093-ecb1ad203f3c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.689646 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.689789 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.689842 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.690004 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frkdc\" (UniqueName: \"kubernetes.io/projected/5cccad1c-80c8-4806-a093-ecb1ad203f3c-kube-api-access-frkdc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.690130 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.690225 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5cccad1c-80c8-4806-a093-ecb1ad203f3c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.690338 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5cccad1c-80c8-4806-a093-ecb1ad203f3c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.690443 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 15:20:20 crc
kubenswrapper[4981]: I0128 15:20:20.791784 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.791842 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5cccad1c-80c8-4806-a093-ecb1ad203f3c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.791875 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5cccad1c-80c8-4806-a093-ecb1ad203f3c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.792713 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.792768 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cccad1c-80c8-4806-a093-ecb1ad203f3c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.792799 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5cccad1c-80c8-4806-a093-ecb1ad203f3c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.792842 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5cccad1c-80c8-4806-a093-ecb1ad203f3c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.792858 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.792874 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.792889 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.792974 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frkdc\" (UniqueName: \"kubernetes.io/projected/5cccad1c-80c8-4806-a093-ecb1ad203f3c-kube-api-access-frkdc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.794318 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.794909 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5cccad1c-80c8-4806-a093-ecb1ad203f3c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.795781 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.796394 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cccad1c-80c8-4806-a093-ecb1ad203f3c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.797219 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5cccad1c-80c8-4806-a093-ecb1ad203f3c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.797728 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5cccad1c-80c8-4806-a093-ecb1ad203f3c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.799070 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.800361 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.804851 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.807703 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frkdc\" (UniqueName: \"kubernetes.io/projected/5cccad1c-80c8-4806-a093-ecb1ad203f3c-kube-api-access-frkdc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.833623 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5cccad1c-80c8-4806-a093-ecb1ad203f3c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.838615 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.913754 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:20:20 crc kubenswrapper[4981]: I0128 15:20:20.930624 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 15:20:21 crc kubenswrapper[4981]: I0128 15:20:21.832253 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 28 15:20:21 crc kubenswrapper[4981]: I0128 15:20:21.835652 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 15:20:21 crc kubenswrapper[4981]: I0128 15:20:21.838983 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-8lr2w" Jan 28 15:20:21 crc kubenswrapper[4981]: I0128 15:20:21.839216 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 28 15:20:21 crc kubenswrapper[4981]: I0128 15:20:21.839577 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 28 15:20:21 crc kubenswrapper[4981]: I0128 15:20:21.841633 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 28 15:20:21 crc kubenswrapper[4981]: I0128 15:20:21.841687 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 15:20:21 crc kubenswrapper[4981]: I0128 15:20:21.848715 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 28 15:20:21 crc kubenswrapper[4981]: I0128 15:20:21.909395 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:21 crc kubenswrapper[4981]: I0128 15:20:21.911706 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:21 crc kubenswrapper[4981]: I0128 15:20:21.911861 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-kolla-config\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:21 crc kubenswrapper[4981]: I0128 15:20:21.911949 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-config-data-default\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:21 crc kubenswrapper[4981]: I0128 15:20:21.912023 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:21 crc kubenswrapper[4981]: I0128 15:20:21.912048 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkj9x\" (UniqueName: \"kubernetes.io/projected/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-kube-api-access-gkj9x\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:21 crc kubenswrapper[4981]: I0128 15:20:21.912081 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:21 crc kubenswrapper[4981]: I0128 15:20:21.912110 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:22 crc kubenswrapper[4981]: I0128 15:20:22.014172 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-config-data-default\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:22 crc kubenswrapper[4981]: I0128 15:20:22.014337 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:22 crc kubenswrapper[4981]: I0128 15:20:22.014397 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkj9x\" (UniqueName: \"kubernetes.io/projected/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-kube-api-access-gkj9x\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:22 crc kubenswrapper[4981]: I0128 15:20:22.014474 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:22 crc kubenswrapper[4981]: I0128 15:20:22.014553 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:22 crc kubenswrapper[4981]: I0128 15:20:22.014672 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:22 crc kubenswrapper[4981]: I0128 15:20:22.014742 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:22 crc kubenswrapper[4981]: I0128 15:20:22.014833 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-kolla-config\") pod \"openstack-galera-0\" (UID: 
\"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:22 crc kubenswrapper[4981]: I0128 15:20:22.015699 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-operator-scripts\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:22 crc kubenswrapper[4981]: I0128 15:20:22.015913 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-config-data-default\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:22 crc kubenswrapper[4981]: I0128 15:20:22.016525 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-config-data-generated\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:22 crc kubenswrapper[4981]: I0128 15:20:22.016735 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-galera-0" Jan 28 15:20:22 crc kubenswrapper[4981]: I0128 15:20:22.020429 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:22 crc kubenswrapper[4981]: I0128 15:20:22.020675 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-kolla-config\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:22 crc kubenswrapper[4981]: I0128 15:20:22.031213 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkj9x\" (UniqueName: \"kubernetes.io/projected/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-kube-api-access-gkj9x\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:22 crc kubenswrapper[4981]: I0128 15:20:22.031721 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee506ff0-7634-45eb-ac9f-5d5de1b3c40a-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:22 crc kubenswrapper[4981]: I0128 15:20:22.040464 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a\") " pod="openstack/openstack-galera-0" Jan 28 15:20:22 crc kubenswrapper[4981]: I0128 15:20:22.154650 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.185753 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.187056 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.205276 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.205727 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.205923 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-np5bb" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.206041 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.206052 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.332402 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.332469 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.332500 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.332626 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.332691 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.332734 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8brx\" (UniqueName: \"kubernetes.io/projected/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-kube-api-access-d8brx\") 
pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.332762 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.332823 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.342567 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.343758 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.345397 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.345882 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-l2zfd" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.345891 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.365961 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.434602 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3bab2457-dbba-4fa0-b0c7-0b05a9546bc6-kolla-config\") pod \"memcached-0\" (UID: \"3bab2457-dbba-4fa0-b0c7-0b05a9546bc6\") " pod="openstack/memcached-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.434671 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.434706 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8fw7\" (UniqueName: \"kubernetes.io/projected/3bab2457-dbba-4fa0-b0c7-0b05a9546bc6-kube-api-access-n8fw7\") pod \"memcached-0\" (UID: \"3bab2457-dbba-4fa0-b0c7-0b05a9546bc6\") " pod="openstack/memcached-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.434730 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.434751 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.434770 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bab2457-dbba-4fa0-b0c7-0b05a9546bc6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3bab2457-dbba-4fa0-b0c7-0b05a9546bc6\") " pod="openstack/memcached-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.434799 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.434822 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.434840 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bab2457-dbba-4fa0-b0c7-0b05a9546bc6-config-data\") pod \"memcached-0\" (UID: \"3bab2457-dbba-4fa0-b0c7-0b05a9546bc6\") " pod="openstack/memcached-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.434863 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8brx\" (UniqueName: \"kubernetes.io/projected/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-kube-api-access-d8brx\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.434883 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.434898 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab2457-dbba-4fa0-b0c7-0b05a9546bc6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3bab2457-dbba-4fa0-b0c7-0b05a9546bc6\") " pod="openstack/memcached-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.434925 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.435944 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.436243 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.436394 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.438963 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.439334 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.444171 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.453323 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.453324 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8brx\" (UniqueName: \"kubernetes.io/projected/aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd-kube-api-access-d8brx\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.464270 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd\") " pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.521203 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.536395 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3bab2457-dbba-4fa0-b0c7-0b05a9546bc6-kolla-config\") pod \"memcached-0\" (UID: \"3bab2457-dbba-4fa0-b0c7-0b05a9546bc6\") " pod="openstack/memcached-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.536494 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8fw7\" (UniqueName: \"kubernetes.io/projected/3bab2457-dbba-4fa0-b0c7-0b05a9546bc6-kube-api-access-n8fw7\") pod \"memcached-0\" (UID: \"3bab2457-dbba-4fa0-b0c7-0b05a9546bc6\") " pod="openstack/memcached-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.536542 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bab2457-dbba-4fa0-b0c7-0b05a9546bc6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3bab2457-dbba-4fa0-b0c7-0b05a9546bc6\") " pod="openstack/memcached-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.536593 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bab2457-dbba-4fa0-b0c7-0b05a9546bc6-config-data\") pod \"memcached-0\" (UID: \"3bab2457-dbba-4fa0-b0c7-0b05a9546bc6\") " pod="openstack/memcached-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.536634 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab2457-dbba-4fa0-b0c7-0b05a9546bc6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3bab2457-dbba-4fa0-b0c7-0b05a9546bc6\") " pod="openstack/memcached-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.538015 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3bab2457-dbba-4fa0-b0c7-0b05a9546bc6-kolla-config\") pod \"memcached-0\" (UID: \"3bab2457-dbba-4fa0-b0c7-0b05a9546bc6\") " pod="openstack/memcached-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.538106 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3bab2457-dbba-4fa0-b0c7-0b05a9546bc6-config-data\") pod \"memcached-0\" (UID: \"3bab2457-dbba-4fa0-b0c7-0b05a9546bc6\") " pod="openstack/memcached-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.540522 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bab2457-dbba-4fa0-b0c7-0b05a9546bc6-memcached-tls-certs\") pod \"memcached-0\" (UID: \"3bab2457-dbba-4fa0-b0c7-0b05a9546bc6\") " pod="openstack/memcached-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.540536 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bab2457-dbba-4fa0-b0c7-0b05a9546bc6-combined-ca-bundle\") pod \"memcached-0\" (UID: \"3bab2457-dbba-4fa0-b0c7-0b05a9546bc6\") " pod="openstack/memcached-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.554426 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8fw7\" (UniqueName: \"kubernetes.io/projected/3bab2457-dbba-4fa0-b0c7-0b05a9546bc6-kube-api-access-n8fw7\") pod \"memcached-0\" (UID: 
\"3bab2457-dbba-4fa0-b0c7-0b05a9546bc6\") " pod="openstack/memcached-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.657404 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 28 15:20:23 crc kubenswrapper[4981]: I0128 15:20:23.809180 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bp7bz" event={"ID":"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5","Type":"ContainerStarted","Data":"09e511056b8f2fd02d8d71dca480851293b8e689e48c717f28f243afe13771bd"} Jan 28 15:20:25 crc kubenswrapper[4981]: I0128 15:20:25.494388 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 15:20:25 crc kubenswrapper[4981]: I0128 15:20:25.496091 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 15:20:25 crc kubenswrapper[4981]: I0128 15:20:25.497763 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-nchz6" Jan 28 15:20:25 crc kubenswrapper[4981]: I0128 15:20:25.505344 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 15:20:25 crc kubenswrapper[4981]: I0128 15:20:25.568728 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxd4g\" (UniqueName: \"kubernetes.io/projected/756f0fb3-a2dc-4084-bd40-85fa0bf855bd-kube-api-access-cxd4g\") pod \"kube-state-metrics-0\" (UID: \"756f0fb3-a2dc-4084-bd40-85fa0bf855bd\") " pod="openstack/kube-state-metrics-0" Jan 28 15:20:25 crc kubenswrapper[4981]: I0128 15:20:25.670105 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxd4g\" (UniqueName: \"kubernetes.io/projected/756f0fb3-a2dc-4084-bd40-85fa0bf855bd-kube-api-access-cxd4g\") pod \"kube-state-metrics-0\" (UID: \"756f0fb3-a2dc-4084-bd40-85fa0bf855bd\") " pod="openstack/kube-state-metrics-0" Jan 28 15:20:25 crc kubenswrapper[4981]: I0128 15:20:25.719025 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxd4g\" (UniqueName: \"kubernetes.io/projected/756f0fb3-a2dc-4084-bd40-85fa0bf855bd-kube-api-access-cxd4g\") pod \"kube-state-metrics-0\" (UID: \"756f0fb3-a2dc-4084-bd40-85fa0bf855bd\") " pod="openstack/kube-state-metrics-0" Jan 28 15:20:25 crc kubenswrapper[4981]: I0128 15:20:25.878494 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.799752 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bnkpb"] Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.800984 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.802512 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.803391 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.816856 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bnkpb"] Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.817270 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-rrrl8" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.849905 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-c8dt7"] Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.854590 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.858523 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8109b11f-0a6a-4894-b7f7-c6d46a62570e-var-run-ovn\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.858565 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8109b11f-0a6a-4894-b7f7-c6d46a62570e-combined-ca-bundle\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.858590 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8109b11f-0a6a-4894-b7f7-c6d46a62570e-scripts\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.858681 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8109b11f-0a6a-4894-b7f7-c6d46a62570e-var-run\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.858741 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8109b11f-0a6a-4894-b7f7-c6d46a62570e-ovn-controller-tls-certs\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.858772 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr7zg\" (UniqueName: \"kubernetes.io/projected/8109b11f-0a6a-4894-b7f7-c6d46a62570e-kube-api-access-pr7zg\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.858819 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8109b11f-0a6a-4894-b7f7-c6d46a62570e-var-log-ovn\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.860672 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-c8dt7"] Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.960038 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv6fz\" (UniqueName: \"kubernetes.io/projected/124744f1-80e9-4fe2-8889-e13e0033ac84-kube-api-access-hv6fz\") pod \"ovn-controller-ovs-c8dt7\" (UID: \"124744f1-80e9-4fe2-8889-e13e0033ac84\") " pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.960558 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8109b11f-0a6a-4894-b7f7-c6d46a62570e-combined-ca-bundle\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.960587 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8109b11f-0a6a-4894-b7f7-c6d46a62570e-scripts\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.960620 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8109b11f-0a6a-4894-b7f7-c6d46a62570e-var-run\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.961418 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8109b11f-0a6a-4894-b7f7-c6d46a62570e-ovn-controller-tls-certs\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.961447 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr7zg\" (UniqueName: \"kubernetes.io/projected/8109b11f-0a6a-4894-b7f7-c6d46a62570e-kube-api-access-pr7zg\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.961476 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/124744f1-80e9-4fe2-8889-e13e0033ac84-var-log\") pod \"ovn-controller-ovs-c8dt7\" (UID: \"124744f1-80e9-4fe2-8889-e13e0033ac84\") " pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.961509 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8109b11f-0a6a-4894-b7f7-c6d46a62570e-var-log-ovn\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.961582 
4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/124744f1-80e9-4fe2-8889-e13e0033ac84-etc-ovs\") pod \"ovn-controller-ovs-c8dt7\" (UID: \"124744f1-80e9-4fe2-8889-e13e0033ac84\") " pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.961728 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/124744f1-80e9-4fe2-8889-e13e0033ac84-scripts\") pod \"ovn-controller-ovs-c8dt7\" (UID: \"124744f1-80e9-4fe2-8889-e13e0033ac84\") " pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.961784 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/124744f1-80e9-4fe2-8889-e13e0033ac84-var-run\") pod \"ovn-controller-ovs-c8dt7\" (UID: \"124744f1-80e9-4fe2-8889-e13e0033ac84\") " pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.961805 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/124744f1-80e9-4fe2-8889-e13e0033ac84-var-lib\") pod \"ovn-controller-ovs-c8dt7\" (UID: \"124744f1-80e9-4fe2-8889-e13e0033ac84\") " pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.961845 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8109b11f-0a6a-4894-b7f7-c6d46a62570e-var-run-ovn\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.962512 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8109b11f-0a6a-4894-b7f7-c6d46a62570e-var-run\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.962568 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8109b11f-0a6a-4894-b7f7-c6d46a62570e-var-log-ovn\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.963046 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8109b11f-0a6a-4894-b7f7-c6d46a62570e-var-run-ovn\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.964450 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8109b11f-0a6a-4894-b7f7-c6d46a62570e-scripts\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.967506 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8109b11f-0a6a-4894-b7f7-c6d46a62570e-combined-ca-bundle\") pod \"ovn-controller-bnkpb\" 
(UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.975956 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr7zg\" (UniqueName: \"kubernetes.io/projected/8109b11f-0a6a-4894-b7f7-c6d46a62570e-kube-api-access-pr7zg\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:29 crc kubenswrapper[4981]: I0128 15:20:29.981805 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/8109b11f-0a6a-4894-b7f7-c6d46a62570e-ovn-controller-tls-certs\") pod \"ovn-controller-bnkpb\" (UID: \"8109b11f-0a6a-4894-b7f7-c6d46a62570e\") " pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.063431 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv6fz\" (UniqueName: \"kubernetes.io/projected/124744f1-80e9-4fe2-8889-e13e0033ac84-kube-api-access-hv6fz\") pod \"ovn-controller-ovs-c8dt7\" (UID: \"124744f1-80e9-4fe2-8889-e13e0033ac84\") " pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.063893 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/124744f1-80e9-4fe2-8889-e13e0033ac84-var-log\") pod \"ovn-controller-ovs-c8dt7\" (UID: \"124744f1-80e9-4fe2-8889-e13e0033ac84\") " pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.063983 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/124744f1-80e9-4fe2-8889-e13e0033ac84-etc-ovs\") pod \"ovn-controller-ovs-c8dt7\" (UID: \"124744f1-80e9-4fe2-8889-e13e0033ac84\") " pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.064012 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/124744f1-80e9-4fe2-8889-e13e0033ac84-scripts\") pod \"ovn-controller-ovs-c8dt7\" (UID: \"124744f1-80e9-4fe2-8889-e13e0033ac84\") " pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.064047 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/124744f1-80e9-4fe2-8889-e13e0033ac84-var-lib\") pod \"ovn-controller-ovs-c8dt7\" (UID: \"124744f1-80e9-4fe2-8889-e13e0033ac84\") " pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.063981 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/124744f1-80e9-4fe2-8889-e13e0033ac84-var-log\") pod \"ovn-controller-ovs-c8dt7\" (UID: \"124744f1-80e9-4fe2-8889-e13e0033ac84\") " pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.064062 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/124744f1-80e9-4fe2-8889-e13e0033ac84-var-run\") pod \"ovn-controller-ovs-c8dt7\" (UID: \"124744f1-80e9-4fe2-8889-e13e0033ac84\") " pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.064118 4981 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/124744f1-80e9-4fe2-8889-e13e0033ac84-var-run\") pod \"ovn-controller-ovs-c8dt7\" (UID: \"124744f1-80e9-4fe2-8889-e13e0033ac84\") " pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.064212 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/124744f1-80e9-4fe2-8889-e13e0033ac84-etc-ovs\") pod \"ovn-controller-ovs-c8dt7\" (UID: \"124744f1-80e9-4fe2-8889-e13e0033ac84\") " pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.064249 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/124744f1-80e9-4fe2-8889-e13e0033ac84-var-lib\") pod \"ovn-controller-ovs-c8dt7\" (UID: \"124744f1-80e9-4fe2-8889-e13e0033ac84\") " pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.067372 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/124744f1-80e9-4fe2-8889-e13e0033ac84-scripts\") pod \"ovn-controller-ovs-c8dt7\" (UID: \"124744f1-80e9-4fe2-8889-e13e0033ac84\") " pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.081862 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv6fz\" (UniqueName: \"kubernetes.io/projected/124744f1-80e9-4fe2-8889-e13e0033ac84-kube-api-access-hv6fz\") pod \"ovn-controller-ovs-c8dt7\" (UID: \"124744f1-80e9-4fe2-8889-e13e0033ac84\") " pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.140177 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.177704 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.660255 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.663213 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.665496 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.665568 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.665684 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.666132 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.670210 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.671473 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-rdg2g" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.774832 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr9f5\" (UniqueName: \"kubernetes.io/projected/0a6c5d9b-a13a-42e8-9d15-f705822bb088-kube-api-access-qr9f5\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.774909 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a6c5d9b-a13a-42e8-9d15-f705822bb088-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.774964 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a6c5d9b-a13a-42e8-9d15-f705822bb088-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.775152 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a6c5d9b-a13a-42e8-9d15-f705822bb088-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.775276 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.775314 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a6c5d9b-a13a-42e8-9d15-f705822bb088-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.775367 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0a6c5d9b-a13a-42e8-9d15-f705822bb088-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.775404 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a6c5d9b-a13a-42e8-9d15-f705822bb088-config\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.876991 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a6c5d9b-a13a-42e8-9d15-f705822bb088-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.877074 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a6c5d9b-a13a-42e8-9d15-f705822bb088-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.877159 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a6c5d9b-a13a-42e8-9d15-f705822bb088-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.877238 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.877278 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a6c5d9b-a13a-42e8-9d15-f705822bb088-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.877322 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0a6c5d9b-a13a-42e8-9d15-f705822bb088-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.877357 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a6c5d9b-a13a-42e8-9d15-f705822bb088-config\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.877439 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr9f5\" (UniqueName: \"kubernetes.io/projected/0a6c5d9b-a13a-42e8-9d15-f705822bb088-kube-api-access-qr9f5\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 
15:20:30.878643 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.878723 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0a6c5d9b-a13a-42e8-9d15-f705822bb088-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.878803 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a6c5d9b-a13a-42e8-9d15-f705822bb088-config\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.879518 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0a6c5d9b-a13a-42e8-9d15-f705822bb088-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.882258 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a6c5d9b-a13a-42e8-9d15-f705822bb088-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.883638 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0a6c5d9b-a13a-42e8-9d15-f705822bb088-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.886305 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a6c5d9b-a13a-42e8-9d15-f705822bb088-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.896509 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr9f5\" (UniqueName: \"kubernetes.io/projected/0a6c5d9b-a13a-42e8-9d15-f705822bb088-kube-api-access-qr9f5\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.909396 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"ovsdbserver-nb-0\" (UID: \"0a6c5d9b-a13a-42e8-9d15-f705822bb088\") " pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:30 crc kubenswrapper[4981]: I0128 15:20:30.989512 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:31 crc kubenswrapper[4981]: I0128 15:20:31.849611 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 15:20:32 crc kubenswrapper[4981]: E0128 15:20:32.474852 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 15:20:32 crc kubenswrapper[4981]: E0128 15:20:32.475334 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98ttc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-f4ss6_openstack(ffafe79d-4b61-4389-8e01-c1b38adf311e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:20:32 crc kubenswrapper[4981]: E0128 15:20:32.476558 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-f4ss6" podUID="ffafe79d-4b61-4389-8e01-c1b38adf311e" Jan 28 15:20:32 crc kubenswrapper[4981]: E0128 15:20:32.607040 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 15:20:32 crc kubenswrapper[4981]: E0128 15:20:32.607614 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ws74,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-x4fdv_openstack(8fef5781-962f-4e98-866b-37c6e503259b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:20:32 crc kubenswrapper[4981]: E0128 15:20:32.608947 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-x4fdv" podUID="8fef5781-962f-4e98-866b-37c6e503259b" Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.683821 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 15:20:32 crc kubenswrapper[4981]: W0128 15:20:32.734867 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaef47cf6_3a65_4f6c_bcd4_68d658d4b1bd.slice/crio-ca7decc5742437bd33a9ac93ad6659f4f21e6f21ab45e1bb4abe20e50ab29bec WatchSource:0}: Error finding container ca7decc5742437bd33a9ac93ad6659f4f21e6f21ab45e1bb4abe20e50ab29bec: Status 404 returned error can't find the container with id ca7decc5742437bd33a9ac93ad6659f4f21e6f21ab45e1bb4abe20e50ab29bec Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.871811 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.876428 4981 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.879702 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.879893 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.879939 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-j6z6m" Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.879903 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.883634 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.892415 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a","Type":"ContainerStarted","Data":"c4027ae29d7ff94439565998db092f29b1482a6d3566ba043e0584437b578b41"} Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.893950 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd","Type":"ContainerStarted","Data":"ca7decc5742437bd33a9ac93ad6659f4f21e6f21ab45e1bb4abe20e50ab29bec"} Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.897277 4981 generic.go:334] "Generic (PLEG): container finished" podID="4e832385-471e-4cf2-a1f5-ecb0aff7b3f5" containerID="5d3dcffe549e0d31b50a101b709500128f4d327d805d8588f13612385857c990" exitCode=0 Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.897742 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bp7bz" event={"ID":"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5","Type":"ContainerDied","Data":"5d3dcffe549e0d31b50a101b709500128f4d327d805d8588f13612385857c990"} Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.922470 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.922519 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-config\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.922556 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.922596 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.922613 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.922633 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tc7p\" (UniqueName: \"kubernetes.io/projected/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-kube-api-access-9tc7p\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.922649 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:32 crc kubenswrapper[4981]: I0128 15:20:32.922694 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.024643 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.024729 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.025174 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-config\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.025223 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.025259 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:33 crc 
kubenswrapper[4981]: I0128 15:20:33.025278 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.025296 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tc7p\" (UniqueName: \"kubernetes.io/projected/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-kube-api-access-9tc7p\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.025314 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.025335 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.025467 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.026039 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-config\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.031414 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.032387 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.032768 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.035327 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" 
(UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.052485 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tc7p\" (UniqueName: \"kubernetes.io/projected/0f9063a3-5cdd-4e55-a714-79db63f3b8b9-kube-api-access-9tc7p\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.052580 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"0f9063a3-5cdd-4e55-a714-79db63f3b8b9\") " pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.095105 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.103818 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.122706 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bnkpb"] Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.195863 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.343604 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f4ss6" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.355964 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x4fdv" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.364695 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-c8dt7"] Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.433893 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fef5781-962f-4e98-866b-37c6e503259b-dns-svc\") pod \"8fef5781-962f-4e98-866b-37c6e503259b\" (UID: \"8fef5781-962f-4e98-866b-37c6e503259b\") " Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.434285 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ws74\" (UniqueName: \"kubernetes.io/projected/8fef5781-962f-4e98-866b-37c6e503259b-kube-api-access-7ws74\") pod \"8fef5781-962f-4e98-866b-37c6e503259b\" (UID: \"8fef5781-962f-4e98-866b-37c6e503259b\") " Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.435016 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fef5781-962f-4e98-866b-37c6e503259b-config\") pod \"8fef5781-962f-4e98-866b-37c6e503259b\" (UID: \"8fef5781-962f-4e98-866b-37c6e503259b\") " Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.435060 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffafe79d-4b61-4389-8e01-c1b38adf311e-config\") pod \"ffafe79d-4b61-4389-8e01-c1b38adf311e\" (UID: \"ffafe79d-4b61-4389-8e01-c1b38adf311e\") " Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.435118 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98ttc\" (UniqueName: 
\"kubernetes.io/projected/ffafe79d-4b61-4389-8e01-c1b38adf311e-kube-api-access-98ttc\") pod \"ffafe79d-4b61-4389-8e01-c1b38adf311e\" (UID: \"ffafe79d-4b61-4389-8e01-c1b38adf311e\") " Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.436088 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fef5781-962f-4e98-866b-37c6e503259b-config" (OuterVolumeSpecName: "config") pod "8fef5781-962f-4e98-866b-37c6e503259b" (UID: "8fef5781-962f-4e98-866b-37c6e503259b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.436536 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffafe79d-4b61-4389-8e01-c1b38adf311e-config" (OuterVolumeSpecName: "config") pod "ffafe79d-4b61-4389-8e01-c1b38adf311e" (UID: "ffafe79d-4b61-4389-8e01-c1b38adf311e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.439485 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fef5781-962f-4e98-866b-37c6e503259b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8fef5781-962f-4e98-866b-37c6e503259b" (UID: "8fef5781-962f-4e98-866b-37c6e503259b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.452509 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fef5781-962f-4e98-866b-37c6e503259b-kube-api-access-7ws74" (OuterVolumeSpecName: "kube-api-access-7ws74") pod "8fef5781-962f-4e98-866b-37c6e503259b" (UID: "8fef5781-962f-4e98-866b-37c6e503259b"). InnerVolumeSpecName "kube-api-access-7ws74". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.452609 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffafe79d-4b61-4389-8e01-c1b38adf311e-kube-api-access-98ttc" (OuterVolumeSpecName: "kube-api-access-98ttc") pod "ffafe79d-4b61-4389-8e01-c1b38adf311e" (UID: "ffafe79d-4b61-4389-8e01-c1b38adf311e"). InnerVolumeSpecName "kube-api-access-98ttc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.498680 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.509545 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fwnkd"] Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.545620 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8fef5781-962f-4e98-866b-37c6e503259b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.545653 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ws74\" (UniqueName: \"kubernetes.io/projected/8fef5781-962f-4e98-866b-37c6e503259b-kube-api-access-7ws74\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.545663 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8fef5781-962f-4e98-866b-37c6e503259b-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.545672 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffafe79d-4b61-4389-8e01-c1b38adf311e-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.545682 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98ttc\" (UniqueName: \"kubernetes.io/projected/ffafe79d-4b61-4389-8e01-c1b38adf311e-kube-api-access-98ttc\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.558449 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.585000 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 15:20:33 crc kubenswrapper[4981]: W0128 15:20:33.606471 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a6c5d9b_a13a_42e8_9d15_f705822bb088.slice/crio-ae43c05772b458b8620fc1ff6873b2fcad9d71abc23a52d6ae23efe92af1260d WatchSource:0}: Error finding container ae43c05772b458b8620fc1ff6873b2fcad9d71abc23a52d6ae23efe92af1260d: Status 404 returned error can't find the container with id ae43c05772b458b8620fc1ff6873b2fcad9d71abc23a52d6ae23efe92af1260d Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.908293 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6456c27c-6d70-453b-a759-b6411aa67f51","Type":"ContainerStarted","Data":"becab4a7d49c9d2b1f2d34e2e1190508103e2d5d863ea9b7f4569176e7700f4d"} Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.918212 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0a6c5d9b-a13a-42e8-9d15-f705822bb088","Type":"ContainerStarted","Data":"ae43c05772b458b8620fc1ff6873b2fcad9d71abc23a52d6ae23efe92af1260d"} Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.923142 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bp7bz" event={"ID":"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5","Type":"ContainerStarted","Data":"4e22c5ced40c75854037ad2abb1a91f643461541f54ed430f2d80502adea94bd"} Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.923378 4981 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-bp7bz" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.924353 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5cccad1c-80c8-4806-a093-ecb1ad203f3c","Type":"ContainerStarted","Data":"17af73a516e39e9c7e3d5b8699165ab84c8251b422eca70c14f174d597350be8"} Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.927322 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd" event={"ID":"c8c96d9d-6274-4310-a9c3-855a38413dda","Type":"ContainerDied","Data":"0a745bb45950f8a380c9451b07b9868ab46671e1545f01eae1bb83defabfccea"} Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.927220 4981 generic.go:334] "Generic (PLEG): container finished" podID="c8c96d9d-6274-4310-a9c3-855a38413dda" containerID="0a745bb45950f8a380c9451b07b9868ab46671e1545f01eae1bb83defabfccea" exitCode=0 Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.927658 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd" event={"ID":"c8c96d9d-6274-4310-a9c3-855a38413dda","Type":"ContainerStarted","Data":"2b87994befb49c036ea076374fc83481d9c1a6d69a877834b59f47e2008ebccd"} Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.928703 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bnkpb" event={"ID":"8109b11f-0a6a-4894-b7f7-c6d46a62570e","Type":"ContainerStarted","Data":"bd926cda8713088dcbce25a30883bca6224f4a422bc3c0e8cbe368b1102df29a"} Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.930286 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-f4ss6" event={"ID":"ffafe79d-4b61-4389-8e01-c1b38adf311e","Type":"ContainerDied","Data":"60b3e45d755fb50f7ccaa7e7d7962149df4ba74f5d44fac0c85a4133d1b93f63"} Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.930328 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-f4ss6" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.931407 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3bab2457-dbba-4fa0-b0c7-0b05a9546bc6","Type":"ContainerStarted","Data":"c48129ff284d32fdd1163fe7d42f5f620ffdc16ebc051a6554b708b60ef799e4"} Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.932458 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x4fdv" Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.932475 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-x4fdv" event={"ID":"8fef5781-962f-4e98-866b-37c6e503259b","Type":"ContainerDied","Data":"6503203be8fd4b9b9d697f88cc24640d9ef4bdc77de12b7f87b4cfde8f1bcaf8"} Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.937230 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c8dt7" event={"ID":"124744f1-80e9-4fe2-8889-e13e0033ac84","Type":"ContainerStarted","Data":"e295100c7f0afeab3460ed9ffa046ec5b6bfd4ebf442b6d9475434dc9a1139e2"} Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.939642 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"756f0fb3-a2dc-4084-bd40-85fa0bf855bd","Type":"ContainerStarted","Data":"04aea29c7fc5b6f1bc58178b3ff3341af23045cb4ed9d21028d3ad3ec684056b"} Jan 28 15:20:33 crc kubenswrapper[4981]: I0128 15:20:33.948387 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-bp7bz" podStartSLOduration=6.115025924 podStartE2EDuration="14.948370587s" podCreationTimestamp="2026-01-28 15:20:19 +0000 UTC" firstStartedPulling="2026-01-28 15:20:23.73868554 +0000 UTC m=+1035.190843771" lastFinishedPulling="2026-01-28 15:20:32.572030193 +0000 UTC m=+1044.024188434" observedRunningTime="2026-01-28 15:20:33.940421746 +0000 UTC m=+1045.392580017" watchObservedRunningTime="2026-01-28 15:20:33.948370587 +0000 UTC m=+1045.400528828" Jan 28 15:20:34 crc kubenswrapper[4981]: I0128 15:20:34.033606 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f4ss6"] Jan 28 15:20:34 crc kubenswrapper[4981]: I0128 15:20:34.040788 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-f4ss6"] Jan 28 15:20:34 crc kubenswrapper[4981]: I0128 15:20:34.053113 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x4fdv"] Jan 28 15:20:34 crc kubenswrapper[4981]: I0128 15:20:34.059871 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x4fdv"] Jan 28 15:20:34 crc kubenswrapper[4981]: I0128 15:20:34.380695 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 15:20:34 crc kubenswrapper[4981]: W0128 15:20:34.522759 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f9063a3_5cdd_4e55_a714_79db63f3b8b9.slice/crio-2a947da01e48d8b4fdca353be6bd5c9b9c5f29f010ad7875309ffc40c6ec5f56 WatchSource:0}: Error finding container 2a947da01e48d8b4fdca353be6bd5c9b9c5f29f010ad7875309ffc40c6ec5f56: Status 404 returned error can't find the container with id 2a947da01e48d8b4fdca353be6bd5c9b9c5f29f010ad7875309ffc40c6ec5f56 Jan 28 15:20:34 crc kubenswrapper[4981]: I0128 15:20:34.950143 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0f9063a3-5cdd-4e55-a714-79db63f3b8b9","Type":"ContainerStarted","Data":"2a947da01e48d8b4fdca353be6bd5c9b9c5f29f010ad7875309ffc40c6ec5f56"} Jan 28 15:20:35 crc kubenswrapper[4981]: I0128 15:20:35.338558 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fef5781-962f-4e98-866b-37c6e503259b" path="/var/lib/kubelet/pods/8fef5781-962f-4e98-866b-37c6e503259b/volumes" Jan 28 15:20:35 crc 
kubenswrapper[4981]: I0128 15:20:35.339016 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffafe79d-4b61-4389-8e01-c1b38adf311e" path="/var/lib/kubelet/pods/ffafe79d-4b61-4389-8e01-c1b38adf311e/volumes" Jan 28 15:20:37 crc kubenswrapper[4981]: I0128 15:20:37.975266 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd" event={"ID":"c8c96d9d-6274-4310-a9c3-855a38413dda","Type":"ContainerStarted","Data":"f38c6bafe550f1e1034f532ea5bafd24b50fc368a5d0bec5780144b00e34310f"} Jan 28 15:20:37 crc kubenswrapper[4981]: I0128 15:20:37.975796 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd" Jan 28 15:20:38 crc kubenswrapper[4981]: I0128 15:20:38.002446 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd" podStartSLOduration=19.002426435 podStartE2EDuration="19.002426435s" podCreationTimestamp="2026-01-28 15:20:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:37.997740601 +0000 UTC m=+1049.449898852" watchObservedRunningTime="2026-01-28 15:20:38.002426435 +0000 UTC m=+1049.454584686" Jan 28 15:20:39 crc kubenswrapper[4981]: I0128 15:20:39.485720 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-bp7bz" Jan 28 15:20:43 crc kubenswrapper[4981]: I0128 15:20:43.028843 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd","Type":"ContainerStarted","Data":"f529e876141448994943e6c86ac8482182036825b29cae9f1c0853b4564be7cc"} Jan 28 15:20:44 crc kubenswrapper[4981]: I0128 15:20:44.043731 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0f9063a3-5cdd-4e55-a714-79db63f3b8b9","Type":"ContainerStarted","Data":"555a706318190a4923305fd7b591de07a484ef2cdb02686951ce39c7f542f1c2"} Jan 28 15:20:44 crc kubenswrapper[4981]: I0128 15:20:44.051926 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5cccad1c-80c8-4806-a093-ecb1ad203f3c","Type":"ContainerStarted","Data":"ed3fa028e256ef52d67123bf375679a669443697914c1d8322591cd65286f694"} Jan 28 15:20:44 crc kubenswrapper[4981]: I0128 15:20:44.054301 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"3bab2457-dbba-4fa0-b0c7-0b05a9546bc6","Type":"ContainerStarted","Data":"1bb81138bae6884f97648176ad3a78289f4fe7b25c9a154e85a8588dc8c03e2c"} Jan 28 15:20:44 crc kubenswrapper[4981]: I0128 15:20:44.054420 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 28 15:20:44 crc kubenswrapper[4981]: I0128 15:20:44.056361 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a","Type":"ContainerStarted","Data":"26f9b661f1186aef4203e83f884eb67884b82372bd398b8d302c6a074e04034c"} Jan 28 15:20:44 crc kubenswrapper[4981]: I0128 15:20:44.059275 4981 generic.go:334] "Generic (PLEG): container finished" podID="124744f1-80e9-4fe2-8889-e13e0033ac84" containerID="70ea9114d60e8e3d18dd2f91f50b91ea10d056fc795c7da396198766390a0ff5" exitCode=0 Jan 28 15:20:44 crc kubenswrapper[4981]: I0128 15:20:44.059406 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-c8dt7" event={"ID":"124744f1-80e9-4fe2-8889-e13e0033ac84","Type":"ContainerDied","Data":"70ea9114d60e8e3d18dd2f91f50b91ea10d056fc795c7da396198766390a0ff5"} Jan 28 15:20:44 crc kubenswrapper[4981]: I0128 15:20:44.063011 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bnkpb" event={"ID":"8109b11f-0a6a-4894-b7f7-c6d46a62570e","Type":"ContainerStarted","Data":"846907c0fc210d29e3dc917461be788aca7049f73d6754af3fb0fa82330695a9"} Jan 28 15:20:44 crc kubenswrapper[4981]: I0128 15:20:44.063184 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-bnkpb" Jan 28 15:20:44 crc kubenswrapper[4981]: I0128 15:20:44.065231 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"756f0fb3-a2dc-4084-bd40-85fa0bf855bd","Type":"ContainerStarted","Data":"690dd72fb979939021095f044a3c5d5b6a2df24f6357b3c47a23e68bdf325490"} Jan 28 15:20:44 crc kubenswrapper[4981]: I0128 15:20:44.065335 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 15:20:44 crc kubenswrapper[4981]: I0128 15:20:44.070273 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6456c27c-6d70-453b-a759-b6411aa67f51","Type":"ContainerStarted","Data":"ae54a8260c30b63b6c7115a3e7a119595f296196630adf0f1e2c402962c61321"} Jan 28 15:20:44 crc kubenswrapper[4981]: I0128 15:20:44.072525 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0a6c5d9b-a13a-42e8-9d15-f705822bb088","Type":"ContainerStarted","Data":"96cb01b45397831f6e2a59798b079d03d2d11e3862f0697a91928ff76a80f63a"} Jan 28 15:20:44 crc kubenswrapper[4981]: I0128 15:20:44.146972 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=12.65116477 podStartE2EDuration="21.146950343s" podCreationTimestamp="2026-01-28 15:20:23 +0000 UTC" firstStartedPulling="2026-01-28 15:20:33.600643152 +0000 UTC m=+1045.052801393" lastFinishedPulling="2026-01-28 15:20:42.096428715 +0000 UTC m=+1053.548586966" observedRunningTime="2026-01-28 15:20:44.133912667 +0000 UTC m=+1055.586070948" watchObservedRunningTime="2026-01-28 15:20:44.146950343 +0000 UTC m=+1055.599108584" Jan 28 15:20:44 crc kubenswrapper[4981]: I0128 15:20:44.189019 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=10.55740521 podStartE2EDuration="19.18900261s" podCreationTimestamp="2026-01-28 15:20:25 +0000 UTC" firstStartedPulling="2026-01-28 15:20:33.137789679 +0000 UTC m=+1044.589947920" lastFinishedPulling="2026-01-28 15:20:41.769387049 +0000 UTC m=+1053.221545320" observedRunningTime="2026-01-28 15:20:44.185590919 +0000 UTC m=+1055.637749210" watchObservedRunningTime="2026-01-28 15:20:44.18900261 +0000 UTC m=+1055.641160851" Jan 28 15:20:44 crc kubenswrapper[4981]: I0128 15:20:44.211226 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-bnkpb" podStartSLOduration=6.356573345 podStartE2EDuration="15.211203719s" podCreationTimestamp="2026-01-28 15:20:29 +0000 UTC" firstStartedPulling="2026-01-28 15:20:33.238169555 +0000 UTC m=+1044.690327796" lastFinishedPulling="2026-01-28 15:20:42.092799919 +0000 UTC m=+1053.544958170" observedRunningTime="2026-01-28 15:20:44.203877085 +0000 UTC m=+1055.656035336" watchObservedRunningTime="2026-01-28 
15:20:44.211203719 +0000 UTC m=+1055.663361960" Jan 28 15:20:44 crc kubenswrapper[4981]: I0128 15:20:44.758433 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd" Jan 28 15:20:44 crc kubenswrapper[4981]: I0128 15:20:44.799038 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bp7bz"] Jan 28 15:20:44 crc kubenswrapper[4981]: I0128 15:20:44.799293 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-bp7bz" podUID="4e832385-471e-4cf2-a1f5-ecb0aff7b3f5" containerName="dnsmasq-dns" containerID="cri-o://4e22c5ced40c75854037ad2abb1a91f643461541f54ed430f2d80502adea94bd" gracePeriod=10 Jan 28 15:20:45 crc kubenswrapper[4981]: I0128 15:20:45.085049 4981 generic.go:334] "Generic (PLEG): container finished" podID="4e832385-471e-4cf2-a1f5-ecb0aff7b3f5" containerID="4e22c5ced40c75854037ad2abb1a91f643461541f54ed430f2d80502adea94bd" exitCode=0 Jan 28 15:20:45 crc kubenswrapper[4981]: I0128 15:20:45.085140 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bp7bz" event={"ID":"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5","Type":"ContainerDied","Data":"4e22c5ced40c75854037ad2abb1a91f643461541f54ed430f2d80502adea94bd"} Jan 28 15:20:45 crc kubenswrapper[4981]: I0128 15:20:45.093080 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c8dt7" event={"ID":"124744f1-80e9-4fe2-8889-e13e0033ac84","Type":"ContainerStarted","Data":"35d6fa88a2cf700465004ed8280c1f4519a4cea0523dedcbd315324639b2d7e0"} Jan 28 15:20:45 crc kubenswrapper[4981]: I0128 15:20:45.093111 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-c8dt7" event={"ID":"124744f1-80e9-4fe2-8889-e13e0033ac84","Type":"ContainerStarted","Data":"b97e3c2ecd9f4cb35c3b0daa0666d03aad22858071f090fe5520d36d1986e90f"} Jan 28 15:20:45 crc kubenswrapper[4981]: I0128 15:20:45.179008 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:45 crc kubenswrapper[4981]: I0128 15:20:45.179060 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:20:45 crc kubenswrapper[4981]: I0128 15:20:45.294397 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bp7bz" Jan 28 15:20:45 crc kubenswrapper[4981]: I0128 15:20:45.310916 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-c8dt7" podStartSLOduration=7.912828828 podStartE2EDuration="16.310900396s" podCreationTimestamp="2026-01-28 15:20:29 +0000 UTC" firstStartedPulling="2026-01-28 15:20:33.371123296 +0000 UTC m=+1044.823281537" lastFinishedPulling="2026-01-28 15:20:41.769194864 +0000 UTC m=+1053.221353105" observedRunningTime="2026-01-28 15:20:45.113268587 +0000 UTC m=+1056.565426828" watchObservedRunningTime="2026-01-28 15:20:45.310900396 +0000 UTC m=+1056.763058637" Jan 28 15:20:45 crc kubenswrapper[4981]: I0128 15:20:45.440924 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45xlh\" (UniqueName: \"kubernetes.io/projected/4e832385-471e-4cf2-a1f5-ecb0aff7b3f5-kube-api-access-45xlh\") pod \"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5\" (UID: \"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5\") " Jan 28 15:20:45 crc kubenswrapper[4981]: I0128 15:20:45.441032 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e832385-471e-4cf2-a1f5-ecb0aff7b3f5-dns-svc\") pod \"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5\" (UID: \"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5\") " Jan 28 15:20:45 crc kubenswrapper[4981]: I0128 15:20:45.441093 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e832385-471e-4cf2-a1f5-ecb0aff7b3f5-config\") pod \"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5\" (UID: \"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5\") " Jan 28 15:20:45 crc kubenswrapper[4981]: I0128 15:20:45.446675 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e832385-471e-4cf2-a1f5-ecb0aff7b3f5-kube-api-access-45xlh" (OuterVolumeSpecName: "kube-api-access-45xlh") pod "4e832385-471e-4cf2-a1f5-ecb0aff7b3f5" (UID: "4e832385-471e-4cf2-a1f5-ecb0aff7b3f5"). InnerVolumeSpecName "kube-api-access-45xlh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:20:45 crc kubenswrapper[4981]: I0128 15:20:45.481177 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e832385-471e-4cf2-a1f5-ecb0aff7b3f5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4e832385-471e-4cf2-a1f5-ecb0aff7b3f5" (UID: "4e832385-471e-4cf2-a1f5-ecb0aff7b3f5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:20:45 crc kubenswrapper[4981]: I0128 15:20:45.489851 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e832385-471e-4cf2-a1f5-ecb0aff7b3f5-config" (OuterVolumeSpecName: "config") pod "4e832385-471e-4cf2-a1f5-ecb0aff7b3f5" (UID: "4e832385-471e-4cf2-a1f5-ecb0aff7b3f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:20:45 crc kubenswrapper[4981]: I0128 15:20:45.543373 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e832385-471e-4cf2-a1f5-ecb0aff7b3f5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:45 crc kubenswrapper[4981]: I0128 15:20:45.543419 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e832385-471e-4cf2-a1f5-ecb0aff7b3f5-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:45 crc kubenswrapper[4981]: I0128 15:20:45.543435 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45xlh\" (UniqueName: \"kubernetes.io/projected/4e832385-471e-4cf2-a1f5-ecb0aff7b3f5-kube-api-access-45xlh\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:46 crc kubenswrapper[4981]: I0128 15:20:46.108714 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-bp7bz" Jan 28 15:20:46 crc kubenswrapper[4981]: I0128 15:20:46.108717 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-bp7bz" event={"ID":"4e832385-471e-4cf2-a1f5-ecb0aff7b3f5","Type":"ContainerDied","Data":"09e511056b8f2fd02d8d71dca480851293b8e689e48c717f28f243afe13771bd"} Jan 28 15:20:46 crc kubenswrapper[4981]: I0128 15:20:46.109308 4981 scope.go:117] "RemoveContainer" containerID="4e22c5ced40c75854037ad2abb1a91f643461541f54ed430f2d80502adea94bd" Jan 28 15:20:46 crc kubenswrapper[4981]: I0128 15:20:46.155536 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bp7bz"] Jan 28 15:20:46 crc kubenswrapper[4981]: I0128 15:20:46.164478 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-bp7bz"] Jan 28 15:20:46 crc kubenswrapper[4981]: I0128 15:20:46.932904 4981 scope.go:117] "RemoveContainer" containerID="5d3dcffe549e0d31b50a101b709500128f4d327d805d8588f13612385857c990" Jan 28 15:20:47 crc kubenswrapper[4981]: I0128 15:20:47.121822 4981 generic.go:334] "Generic (PLEG): container finished" podID="aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd" containerID="f529e876141448994943e6c86ac8482182036825b29cae9f1c0853b4564be7cc" exitCode=0 Jan 28 15:20:47 crc kubenswrapper[4981]: I0128 15:20:47.121922 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd","Type":"ContainerDied","Data":"f529e876141448994943e6c86ac8482182036825b29cae9f1c0853b4564be7cc"} Jan 28 15:20:47 crc kubenswrapper[4981]: I0128 15:20:47.343389 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e832385-471e-4cf2-a1f5-ecb0aff7b3f5" path="/var/lib/kubelet/pods/4e832385-471e-4cf2-a1f5-ecb0aff7b3f5/volumes" Jan 28 15:20:48 crc kubenswrapper[4981]: I0128 15:20:48.140300 4981 generic.go:334] "Generic (PLEG): container finished" podID="ee506ff0-7634-45eb-ac9f-5d5de1b3c40a" containerID="26f9b661f1186aef4203e83f884eb67884b82372bd398b8d302c6a074e04034c" exitCode=0 Jan 28 15:20:48 crc kubenswrapper[4981]: I0128 15:20:48.140421 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a","Type":"ContainerDied","Data":"26f9b661f1186aef4203e83f884eb67884b82372bd398b8d302c6a074e04034c"} Jan 28 15:20:48 crc kubenswrapper[4981]: I0128 15:20:48.144054 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-cell1-galera-0" event={"ID":"aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd","Type":"ContainerStarted","Data":"a1e00d31fb263aae2fce18b214be8d18ed1a559299196bc2674614374df409e8"} Jan 28 15:20:48 crc kubenswrapper[4981]: I0128 15:20:48.149171 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"0f9063a3-5cdd-4e55-a714-79db63f3b8b9","Type":"ContainerStarted","Data":"c7cd00fd7ed909ffd7f791bde2363e03449c27f1bdcf8e7649dcc5c0a5ab206e"} Jan 28 15:20:48 crc kubenswrapper[4981]: I0128 15:20:48.158263 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"0a6c5d9b-a13a-42e8-9d15-f705822bb088","Type":"ContainerStarted","Data":"5b17948ce43d9a3781242a252a39305ff68bc880223d7c4755442747f7c6973c"} Jan 28 15:20:48 crc kubenswrapper[4981]: I0128 15:20:48.196463 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:48 crc kubenswrapper[4981]: I0128 15:20:48.196507 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:48 crc kubenswrapper[4981]: I0128 15:20:48.210163 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=16.858106631 podStartE2EDuration="26.210145825s" podCreationTimestamp="2026-01-28 15:20:22 +0000 UTC" firstStartedPulling="2026-01-28 15:20:32.740542339 +0000 UTC m=+1044.192700580" lastFinishedPulling="2026-01-28 15:20:42.092581493 +0000 UTC m=+1053.544739774" observedRunningTime="2026-01-28 15:20:48.207809763 +0000 UTC m=+1059.659968044" watchObservedRunningTime="2026-01-28 15:20:48.210145825 +0000 UTC m=+1059.662304076" Jan 28 15:20:48 crc kubenswrapper[4981]: I0128 15:20:48.247962 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=5.580624159 podStartE2EDuration="19.247942539s" podCreationTimestamp="2026-01-28 15:20:29 +0000 UTC" firstStartedPulling="2026-01-28 15:20:33.608842569 +0000 UTC m=+1045.061000810" lastFinishedPulling="2026-01-28 15:20:47.276160939 +0000 UTC m=+1058.728319190" observedRunningTime="2026-01-28 15:20:48.239174546 +0000 UTC m=+1059.691332807" watchObservedRunningTime="2026-01-28 15:20:48.247942539 +0000 UTC m=+1059.700100800" Jan 28 15:20:48 crc kubenswrapper[4981]: I0128 15:20:48.280689 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=4.5525811019999995 podStartE2EDuration="17.280670338s" podCreationTimestamp="2026-01-28 15:20:31 +0000 UTC" firstStartedPulling="2026-01-28 15:20:34.531955885 +0000 UTC m=+1045.984114116" lastFinishedPulling="2026-01-28 15:20:47.260045121 +0000 UTC m=+1058.712203352" observedRunningTime="2026-01-28 15:20:48.27886374 +0000 UTC m=+1059.731022041" watchObservedRunningTime="2026-01-28 15:20:48.280670338 +0000 UTC m=+1059.732828569" Jan 28 15:20:48 crc kubenswrapper[4981]: I0128 15:20:48.288338 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:48 crc kubenswrapper[4981]: I0128 15:20:48.659044 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.180484 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 
15:20:49.196736 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"ee506ff0-7634-45eb-ac9f-5d5de1b3c40a","Type":"ContainerStarted","Data":"26dfc6cd487738576ed2d131877e76dfafbe4325d089a3b0125108b1b6576e39"} Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.227618 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=19.572978977 podStartE2EDuration="29.227406282s" podCreationTimestamp="2026-01-28 15:20:20 +0000 UTC" firstStartedPulling="2026-01-28 15:20:32.496502547 +0000 UTC m=+1043.948660788" lastFinishedPulling="2026-01-28 15:20:42.150929852 +0000 UTC m=+1053.603088093" observedRunningTime="2026-01-28 15:20:49.219129772 +0000 UTC m=+1060.671288093" watchObservedRunningTime="2026-01-28 15:20:49.227406282 +0000 UTC m=+1060.679564563" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.249462 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.253574 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.506614 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-dv8cb"] Jan 28 15:20:49 crc kubenswrapper[4981]: E0128 15:20:49.506935 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e832385-471e-4cf2-a1f5-ecb0aff7b3f5" containerName="dnsmasq-dns" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.506952 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e832385-471e-4cf2-a1f5-ecb0aff7b3f5" containerName="dnsmasq-dns" Jan 28 15:20:49 crc kubenswrapper[4981]: E0128 15:20:49.506962 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e832385-471e-4cf2-a1f5-ecb0aff7b3f5" containerName="init" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.506968 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e832385-471e-4cf2-a1f5-ecb0aff7b3f5" containerName="init" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.507149 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e832385-471e-4cf2-a1f5-ecb0aff7b3f5" containerName="dnsmasq-dns" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.508076 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-dv8cb" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.509499 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.518970 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-dv8cb"] Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.589781 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndg46\" (UniqueName: \"kubernetes.io/projected/9a714c8e-c2b1-4498-902b-7726da6457e7-kube-api-access-ndg46\") pod \"dnsmasq-dns-7f896c8c65-dv8cb\" (UID: \"9a714c8e-c2b1-4498-902b-7726da6457e7\") " pod="openstack/dnsmasq-dns-7f896c8c65-dv8cb" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.589820 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a714c8e-c2b1-4498-902b-7726da6457e7-config\") pod \"dnsmasq-dns-7f896c8c65-dv8cb\" (UID: \"9a714c8e-c2b1-4498-902b-7726da6457e7\") " pod="openstack/dnsmasq-dns-7f896c8c65-dv8cb" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.589857 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a714c8e-c2b1-4498-902b-7726da6457e7-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-dv8cb\" (UID: \"9a714c8e-c2b1-4498-902b-7726da6457e7\") " pod="openstack/dnsmasq-dns-7f896c8c65-dv8cb" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.589874 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a714c8e-c2b1-4498-902b-7726da6457e7-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-dv8cb\" (UID: \"9a714c8e-c2b1-4498-902b-7726da6457e7\") " pod="openstack/dnsmasq-dns-7f896c8c65-dv8cb" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.690785 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndg46\" (UniqueName: \"kubernetes.io/projected/9a714c8e-c2b1-4498-902b-7726da6457e7-kube-api-access-ndg46\") pod \"dnsmasq-dns-7f896c8c65-dv8cb\" (UID: \"9a714c8e-c2b1-4498-902b-7726da6457e7\") " pod="openstack/dnsmasq-dns-7f896c8c65-dv8cb" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.691025 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a714c8e-c2b1-4498-902b-7726da6457e7-config\") pod \"dnsmasq-dns-7f896c8c65-dv8cb\" (UID: \"9a714c8e-c2b1-4498-902b-7726da6457e7\") " pod="openstack/dnsmasq-dns-7f896c8c65-dv8cb" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.691062 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a714c8e-c2b1-4498-902b-7726da6457e7-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-dv8cb\" (UID: \"9a714c8e-c2b1-4498-902b-7726da6457e7\") " pod="openstack/dnsmasq-dns-7f896c8c65-dv8cb" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.691079 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a714c8e-c2b1-4498-902b-7726da6457e7-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-dv8cb\" (UID: \"9a714c8e-c2b1-4498-902b-7726da6457e7\") " pod="openstack/dnsmasq-dns-7f896c8c65-dv8cb" 
Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.692460 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a714c8e-c2b1-4498-902b-7726da6457e7-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-dv8cb\" (UID: \"9a714c8e-c2b1-4498-902b-7726da6457e7\") " pod="openstack/dnsmasq-dns-7f896c8c65-dv8cb" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.692789 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a714c8e-c2b1-4498-902b-7726da6457e7-config\") pod \"dnsmasq-dns-7f896c8c65-dv8cb\" (UID: \"9a714c8e-c2b1-4498-902b-7726da6457e7\") " pod="openstack/dnsmasq-dns-7f896c8c65-dv8cb" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.692867 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a714c8e-c2b1-4498-902b-7726da6457e7-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-dv8cb\" (UID: \"9a714c8e-c2b1-4498-902b-7726da6457e7\") " pod="openstack/dnsmasq-dns-7f896c8c65-dv8cb" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.707976 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-cbsgf"] Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.709158 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.712249 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.726283 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-cbsgf"] Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.734614 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndg46\" (UniqueName: \"kubernetes.io/projected/9a714c8e-c2b1-4498-902b-7726da6457e7-kube-api-access-ndg46\") pod \"dnsmasq-dns-7f896c8c65-dv8cb\" (UID: \"9a714c8e-c2b1-4498-902b-7726da6457e7\") " pod="openstack/dnsmasq-dns-7f896c8c65-dv8cb" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.792896 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/510c52de-24af-4fe2-833d-0990283aa110-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-cbsgf\" (UID: \"510c52de-24af-4fe2-833d-0990283aa110\") " pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.792965 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/510c52de-24af-4fe2-833d-0990283aa110-ovn-rundir\") pod \"ovn-controller-metrics-cbsgf\" (UID: \"510c52de-24af-4fe2-833d-0990283aa110\") " pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.793020 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/510c52de-24af-4fe2-833d-0990283aa110-combined-ca-bundle\") pod \"ovn-controller-metrics-cbsgf\" (UID: \"510c52de-24af-4fe2-833d-0990283aa110\") " pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.793049 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd2fp\" (UniqueName: \"kubernetes.io/projected/510c52de-24af-4fe2-833d-0990283aa110-kube-api-access-wd2fp\") pod \"ovn-controller-metrics-cbsgf\" (UID: \"510c52de-24af-4fe2-833d-0990283aa110\") " pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.793071 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/510c52de-24af-4fe2-833d-0990283aa110-ovs-rundir\") pod \"ovn-controller-metrics-cbsgf\" (UID: \"510c52de-24af-4fe2-833d-0990283aa110\") " pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.793087 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/510c52de-24af-4fe2-833d-0990283aa110-config\") pod \"ovn-controller-metrics-cbsgf\" (UID: \"510c52de-24af-4fe2-833d-0990283aa110\") " pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.825302 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-dv8cb" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.894070 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/510c52de-24af-4fe2-833d-0990283aa110-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-cbsgf\" (UID: \"510c52de-24af-4fe2-833d-0990283aa110\") " pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.894157 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/510c52de-24af-4fe2-833d-0990283aa110-ovn-rundir\") pod \"ovn-controller-metrics-cbsgf\" (UID: \"510c52de-24af-4fe2-833d-0990283aa110\") " pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.894246 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/510c52de-24af-4fe2-833d-0990283aa110-combined-ca-bundle\") pod \"ovn-controller-metrics-cbsgf\" (UID: \"510c52de-24af-4fe2-833d-0990283aa110\") " pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.894286 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd2fp\" (UniqueName: \"kubernetes.io/projected/510c52de-24af-4fe2-833d-0990283aa110-kube-api-access-wd2fp\") pod \"ovn-controller-metrics-cbsgf\" (UID: \"510c52de-24af-4fe2-833d-0990283aa110\") " pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.894322 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/510c52de-24af-4fe2-833d-0990283aa110-ovs-rundir\") pod \"ovn-controller-metrics-cbsgf\" (UID: \"510c52de-24af-4fe2-833d-0990283aa110\") " pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.894341 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/510c52de-24af-4fe2-833d-0990283aa110-config\") pod \"ovn-controller-metrics-cbsgf\" (UID: 
\"510c52de-24af-4fe2-833d-0990283aa110\") " pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.894610 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/510c52de-24af-4fe2-833d-0990283aa110-ovn-rundir\") pod \"ovn-controller-metrics-cbsgf\" (UID: \"510c52de-24af-4fe2-833d-0990283aa110\") " pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.894707 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/510c52de-24af-4fe2-833d-0990283aa110-ovs-rundir\") pod \"ovn-controller-metrics-cbsgf\" (UID: \"510c52de-24af-4fe2-833d-0990283aa110\") " pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.895531 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/510c52de-24af-4fe2-833d-0990283aa110-config\") pod \"ovn-controller-metrics-cbsgf\" (UID: \"510c52de-24af-4fe2-833d-0990283aa110\") " pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.902296 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.902373 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.902433 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.903112 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"176dd31ff4b98ab75c0fb5c532e4cb21dde081ab7085a97e6c5485cd5bc31437"} pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.903177 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" containerID="cri-o://176dd31ff4b98ab75c0fb5c532e4cb21dde081ab7085a97e6c5485cd5bc31437" gracePeriod=600 Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.909745 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/510c52de-24af-4fe2-833d-0990283aa110-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-cbsgf\" (UID: \"510c52de-24af-4fe2-833d-0990283aa110\") " pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.911779 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/510c52de-24af-4fe2-833d-0990283aa110-combined-ca-bundle\") pod \"ovn-controller-metrics-cbsgf\" (UID: \"510c52de-24af-4fe2-833d-0990283aa110\") " pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:49 crc kubenswrapper[4981]: I0128 15:20:49.927757 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd2fp\" (UniqueName: \"kubernetes.io/projected/510c52de-24af-4fe2-833d-0990283aa110-kube-api-access-wd2fp\") pod \"ovn-controller-metrics-cbsgf\" (UID: \"510c52de-24af-4fe2-833d-0990283aa110\") " pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.075341 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-cbsgf" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.155407 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-dv8cb"] Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.179858 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8jgqq"] Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.181071 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.199635 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8jgqq"] Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.199751 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.220371 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.265934 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.303034 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-8jgqq\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.303237 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-8jgqq\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.304156 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-8jgqq\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.304201 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4dm4\" (UniqueName: \"kubernetes.io/projected/aaf45d34-4e3d-4397-93d1-3627de06bfe0-kube-api-access-f4dm4\") pod \"dnsmasq-dns-86db49b7ff-8jgqq\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.304567 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-config\") pod \"dnsmasq-dns-86db49b7ff-8jgqq\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.405898 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-8jgqq\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.406556 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-8jgqq\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.406583 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4dm4\" (UniqueName: \"kubernetes.io/projected/aaf45d34-4e3d-4397-93d1-3627de06bfe0-kube-api-access-f4dm4\") pod \"dnsmasq-dns-86db49b7ff-8jgqq\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.406646 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-config\") pod \"dnsmasq-dns-86db49b7ff-8jgqq\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.406685 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-8jgqq\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.406808 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-8jgqq\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.407657 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-8jgqq\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.407665 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-8jgqq\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 
15:20:50.408836 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-config\") pod \"dnsmasq-dns-86db49b7ff-8jgqq\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.422984 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-dv8cb"] Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.423496 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4dm4\" (UniqueName: \"kubernetes.io/projected/aaf45d34-4e3d-4397-93d1-3627de06bfe0-kube-api-access-f4dm4\") pod \"dnsmasq-dns-86db49b7ff-8jgqq\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" Jan 28 15:20:50 crc kubenswrapper[4981]: W0128 15:20:50.429627 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a714c8e_c2b1_4498_902b_7726da6457e7.slice/crio-bfe5cd5de2dc6c29c9fb7f4b479f9251fb251e970fbe0fe9a874852c22070abe WatchSource:0}: Error finding container bfe5cd5de2dc6c29c9fb7f4b479f9251fb251e970fbe0fe9a874852c22070abe: Status 404 returned error can't find the container with id bfe5cd5de2dc6c29c9fb7f4b479f9251fb251e970fbe0fe9a874852c22070abe Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.518361 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.550766 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.552162 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.554804 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.559401 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-h2fx6" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.559566 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.560485 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.566636 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.614234 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-cbsgf"] Jan 28 15:20:50 crc kubenswrapper[4981]: W0128 15:20:50.620324 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod510c52de_24af_4fe2_833d_0990283aa110.slice/crio-8c63eca4f02bbaf7a5031918107d421557495a5d15dad083697c807be3e67f41 WatchSource:0}: Error finding container 8c63eca4f02bbaf7a5031918107d421557495a5d15dad083697c807be3e67f41: Status 404 returned error can't find the container with id 8c63eca4f02bbaf7a5031918107d421557495a5d15dad083697c807be3e67f41 Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.711004 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71b20415-1b79-4236-89db-42f2787cc2c2-scripts\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.711061 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/71b20415-1b79-4236-89db-42f2787cc2c2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.711092 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6bcw\" (UniqueName: \"kubernetes.io/projected/71b20415-1b79-4236-89db-42f2787cc2c2-kube-api-access-q6bcw\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.711120 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/71b20415-1b79-4236-89db-42f2787cc2c2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.711159 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/71b20415-1b79-4236-89db-42f2787cc2c2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.711224 4981 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71b20415-1b79-4236-89db-42f2787cc2c2-config\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.711238 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b20415-1b79-4236-89db-42f2787cc2c2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.812758 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71b20415-1b79-4236-89db-42f2787cc2c2-config\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.813104 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b20415-1b79-4236-89db-42f2787cc2c2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.813154 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71b20415-1b79-4236-89db-42f2787cc2c2-scripts\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.813214 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/71b20415-1b79-4236-89db-42f2787cc2c2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.813325 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6bcw\" (UniqueName: \"kubernetes.io/projected/71b20415-1b79-4236-89db-42f2787cc2c2-kube-api-access-q6bcw\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.813368 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/71b20415-1b79-4236-89db-42f2787cc2c2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.813424 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/71b20415-1b79-4236-89db-42f2787cc2c2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.813877 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71b20415-1b79-4236-89db-42f2787cc2c2-config\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: 
I0128 15:20:50.816086 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/71b20415-1b79-4236-89db-42f2787cc2c2-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.817738 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71b20415-1b79-4236-89db-42f2787cc2c2-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.819477 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/71b20415-1b79-4236-89db-42f2787cc2c2-scripts\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.825045 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/71b20415-1b79-4236-89db-42f2787cc2c2-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.831421 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/71b20415-1b79-4236-89db-42f2787cc2c2-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.832102 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6bcw\" (UniqueName: \"kubernetes.io/projected/71b20415-1b79-4236-89db-42f2787cc2c2-kube-api-access-q6bcw\") pod \"ovn-northd-0\" (UID: \"71b20415-1b79-4236-89db-42f2787cc2c2\") " pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.900887 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 15:20:50 crc kubenswrapper[4981]: I0128 15:20:50.977438 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8jgqq"] Jan 28 15:20:50 crc kubenswrapper[4981]: W0128 15:20:50.980329 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaaf45d34_4e3d_4397_93d1_3627de06bfe0.slice/crio-b21a8ed29a81abf154f63d7341e095ec78da1c6b70ee4ae4a4910497c3fb47a3 WatchSource:0}: Error finding container b21a8ed29a81abf154f63d7341e095ec78da1c6b70ee4ae4a4910497c3fb47a3: Status 404 returned error can't find the container with id b21a8ed29a81abf154f63d7341e095ec78da1c6b70ee4ae4a4910497c3fb47a3 Jan 28 15:20:51 crc kubenswrapper[4981]: I0128 15:20:51.226124 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" event={"ID":"aaf45d34-4e3d-4397-93d1-3627de06bfe0","Type":"ContainerStarted","Data":"b21a8ed29a81abf154f63d7341e095ec78da1c6b70ee4ae4a4910497c3fb47a3"} Jan 28 15:20:51 crc kubenswrapper[4981]: I0128 15:20:51.229539 4981 generic.go:334] "Generic (PLEG): container finished" podID="67525d77-715e-4ec3-bdbb-6854657355c0" containerID="176dd31ff4b98ab75c0fb5c532e4cb21dde081ab7085a97e6c5485cd5bc31437" exitCode=0 Jan 28 15:20:51 crc kubenswrapper[4981]: I0128 15:20:51.229581 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerDied","Data":"176dd31ff4b98ab75c0fb5c532e4cb21dde081ab7085a97e6c5485cd5bc31437"} Jan 28 15:20:51 crc kubenswrapper[4981]: I0128 15:20:51.229637 4981 scope.go:117] "RemoveContainer" containerID="c69a7071dbf3ec3f1115d8a9515e0de8b513ecd90cb4130db9534e4ea3ba8dac" Jan 28 15:20:51 crc kubenswrapper[4981]: I0128 15:20:51.231716 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-dv8cb" event={"ID":"9a714c8e-c2b1-4498-902b-7726da6457e7","Type":"ContainerStarted","Data":"bfe5cd5de2dc6c29c9fb7f4b479f9251fb251e970fbe0fe9a874852c22070abe"} Jan 28 15:20:51 crc kubenswrapper[4981]: I0128 15:20:51.233149 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-cbsgf" event={"ID":"510c52de-24af-4fe2-833d-0990283aa110","Type":"ContainerStarted","Data":"8c63eca4f02bbaf7a5031918107d421557495a5d15dad083697c807be3e67f41"} Jan 28 15:20:51 crc kubenswrapper[4981]: I0128 15:20:51.358806 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 15:20:51 crc kubenswrapper[4981]: W0128 15:20:51.370459 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71b20415_1b79_4236_89db_42f2787cc2c2.slice/crio-d0739ab816267309a8766345d13dc47d791ae5732c49fc616763bd316d4f25de WatchSource:0}: Error finding container d0739ab816267309a8766345d13dc47d791ae5732c49fc616763bd316d4f25de: Status 404 returned error can't find the container with id d0739ab816267309a8766345d13dc47d791ae5732c49fc616763bd316d4f25de Jan 28 15:20:52 crc kubenswrapper[4981]: I0128 15:20:52.154734 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 28 15:20:52 crc kubenswrapper[4981]: I0128 15:20:52.155045 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 28 15:20:52 crc kubenswrapper[4981]: I0128 
15:20:52.239368 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"71b20415-1b79-4236-89db-42f2787cc2c2","Type":"ContainerStarted","Data":"d0739ab816267309a8766345d13dc47d791ae5732c49fc616763bd316d4f25de"} Jan 28 15:20:53 crc kubenswrapper[4981]: I0128 15:20:53.522381 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:53 crc kubenswrapper[4981]: I0128 15:20:53.522701 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:55 crc kubenswrapper[4981]: I0128 15:20:55.898260 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 15:20:55 crc kubenswrapper[4981]: I0128 15:20:55.916624 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8jgqq"] Jan 28 15:20:55 crc kubenswrapper[4981]: I0128 15:20:55.969574 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-n4hbg"] Jan 28 15:20:55 crc kubenswrapper[4981]: I0128 15:20:55.990430 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:20:55 crc kubenswrapper[4981]: I0128 15:20:55.992949 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-n4hbg"] Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.117328 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvm4z\" (UniqueName: \"kubernetes.io/projected/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-kube-api-access-jvm4z\") pod \"dnsmasq-dns-698758b865-n4hbg\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.117393 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-n4hbg\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.117425 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-dns-svc\") pod \"dnsmasq-dns-698758b865-n4hbg\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.117481 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-config\") pod \"dnsmasq-dns-698758b865-n4hbg\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.117532 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-n4hbg\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.219129 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-config\") pod \"dnsmasq-dns-698758b865-n4hbg\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.219206 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-n4hbg\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.219269 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvm4z\" (UniqueName: \"kubernetes.io/projected/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-kube-api-access-jvm4z\") pod \"dnsmasq-dns-698758b865-n4hbg\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.219292 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-n4hbg\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.219313 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-dns-svc\") pod \"dnsmasq-dns-698758b865-n4hbg\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.220104 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-dns-svc\") pod \"dnsmasq-dns-698758b865-n4hbg\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.220659 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-config\") pod \"dnsmasq-dns-698758b865-n4hbg\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.221175 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-n4hbg\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.221894 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-n4hbg\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.240164 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvm4z\" (UniqueName: 
\"kubernetes.io/projected/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-kube-api-access-jvm4z\") pod \"dnsmasq-dns-698758b865-n4hbg\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.307026 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-cbsgf" event={"ID":"510c52de-24af-4fe2-833d-0990283aa110","Type":"ContainerStarted","Data":"56f8331a20d4d2987214f32b291a7b761dc45e6ca85dabcd963b94e9468dc669"} Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.308879 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" event={"ID":"aaf45d34-4e3d-4397-93d1-3627de06bfe0","Type":"ContainerStarted","Data":"8893cd2c1183e30ac6a54d3facad4ea74c99171421fc5a9e55a024e2608853f5"} Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.316291 4981 generic.go:334] "Generic (PLEG): container finished" podID="9a714c8e-c2b1-4498-902b-7726da6457e7" containerID="1cb9fab1e1bb04c3fdb14b42fe38eb931449919257290774867493a532896f37" exitCode=0 Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.316334 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-dv8cb" event={"ID":"9a714c8e-c2b1-4498-902b-7726da6457e7","Type":"ContainerDied","Data":"1cb9fab1e1bb04c3fdb14b42fe38eb931449919257290774867493a532896f37"} Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.328678 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-cbsgf" podStartSLOduration=7.328657029 podStartE2EDuration="7.328657029s" podCreationTimestamp="2026-01-28 15:20:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:56.323914863 +0000 UTC m=+1067.776073124" watchObservedRunningTime="2026-01-28 15:20:56.328657029 +0000 UTC m=+1067.780815270" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.415698 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:20:56 crc kubenswrapper[4981]: E0128 15:20:56.621818 4981 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.151:57762->38.102.83.151:33457: read tcp 38.102.83.151:57762->38.102.83.151:33457: read: connection reset by peer Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.875248 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.903547 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-dv8cb" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.938437 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-ovsdbserver-nb\") pod \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.938488 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-config\") pod \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.938547 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a714c8e-c2b1-4498-902b-7726da6457e7-dns-svc\") pod \"9a714c8e-c2b1-4498-902b-7726da6457e7\" (UID: \"9a714c8e-c2b1-4498-902b-7726da6457e7\") " Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.938575 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4dm4\" (UniqueName: \"kubernetes.io/projected/aaf45d34-4e3d-4397-93d1-3627de06bfe0-kube-api-access-f4dm4\") pod \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.938590 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndg46\" (UniqueName: \"kubernetes.io/projected/9a714c8e-c2b1-4498-902b-7726da6457e7-kube-api-access-ndg46\") pod \"9a714c8e-c2b1-4498-902b-7726da6457e7\" (UID: \"9a714c8e-c2b1-4498-902b-7726da6457e7\") " Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.938606 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a714c8e-c2b1-4498-902b-7726da6457e7-config\") pod \"9a714c8e-c2b1-4498-902b-7726da6457e7\" (UID: \"9a714c8e-c2b1-4498-902b-7726da6457e7\") " Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.938672 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-dns-svc\") pod \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.938697 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a714c8e-c2b1-4498-902b-7726da6457e7-ovsdbserver-sb\") pod \"9a714c8e-c2b1-4498-902b-7726da6457e7\" (UID: \"9a714c8e-c2b1-4498-902b-7726da6457e7\") " Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.938730 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-ovsdbserver-sb\") pod \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\" (UID: \"aaf45d34-4e3d-4397-93d1-3627de06bfe0\") " Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.950539 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaf45d34-4e3d-4397-93d1-3627de06bfe0-kube-api-access-f4dm4" (OuterVolumeSpecName: "kube-api-access-f4dm4") pod "aaf45d34-4e3d-4397-93d1-3627de06bfe0" (UID: 
"aaf45d34-4e3d-4397-93d1-3627de06bfe0"). InnerVolumeSpecName "kube-api-access-f4dm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.956541 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a714c8e-c2b1-4498-902b-7726da6457e7-kube-api-access-ndg46" (OuterVolumeSpecName: "kube-api-access-ndg46") pod "9a714c8e-c2b1-4498-902b-7726da6457e7" (UID: "9a714c8e-c2b1-4498-902b-7726da6457e7"). InnerVolumeSpecName "kube-api-access-ndg46". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.962080 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a714c8e-c2b1-4498-902b-7726da6457e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9a714c8e-c2b1-4498-902b-7726da6457e7" (UID: "9a714c8e-c2b1-4498-902b-7726da6457e7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.964322 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aaf45d34-4e3d-4397-93d1-3627de06bfe0" (UID: "aaf45d34-4e3d-4397-93d1-3627de06bfe0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.973623 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-config" (OuterVolumeSpecName: "config") pod "aaf45d34-4e3d-4397-93d1-3627de06bfe0" (UID: "aaf45d34-4e3d-4397-93d1-3627de06bfe0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.976440 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aaf45d34-4e3d-4397-93d1-3627de06bfe0" (UID: "aaf45d34-4e3d-4397-93d1-3627de06bfe0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:20:56 crc kubenswrapper[4981]: I0128 15:20:56.980535 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a714c8e-c2b1-4498-902b-7726da6457e7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9a714c8e-c2b1-4498-902b-7726da6457e7" (UID: "9a714c8e-c2b1-4498-902b-7726da6457e7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:56.982461 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a714c8e-c2b1-4498-902b-7726da6457e7-config" (OuterVolumeSpecName: "config") pod "9a714c8e-c2b1-4498-902b-7726da6457e7" (UID: "9a714c8e-c2b1-4498-902b-7726da6457e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:56.986920 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aaf45d34-4e3d-4397-93d1-3627de06bfe0" (UID: "aaf45d34-4e3d-4397-93d1-3627de06bfe0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.018516 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 28 15:20:57 crc kubenswrapper[4981]: E0128 15:20:57.018889 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaf45d34-4e3d-4397-93d1-3627de06bfe0" containerName="init" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.018905 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaf45d34-4e3d-4397-93d1-3627de06bfe0" containerName="init" Jan 28 15:20:57 crc kubenswrapper[4981]: E0128 15:20:57.018914 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a714c8e-c2b1-4498-902b-7726da6457e7" containerName="init" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.018920 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a714c8e-c2b1-4498-902b-7726da6457e7" containerName="init" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.019076 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaf45d34-4e3d-4397-93d1-3627de06bfe0" containerName="init" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.019093 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a714c8e-c2b1-4498-902b-7726da6457e7" containerName="init" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.042462 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a714c8e-c2b1-4498-902b-7726da6457e7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.042495 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndg46\" (UniqueName: \"kubernetes.io/projected/9a714c8e-c2b1-4498-902b-7726da6457e7-kube-api-access-ndg46\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.042509 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4dm4\" (UniqueName: \"kubernetes.io/projected/aaf45d34-4e3d-4397-93d1-3627de06bfe0-kube-api-access-f4dm4\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.042517 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a714c8e-c2b1-4498-902b-7726da6457e7-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.042527 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.042536 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9a714c8e-c2b1-4498-902b-7726da6457e7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.042544 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.042552 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.042560 4981 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aaf45d34-4e3d-4397-93d1-3627de06bfe0-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.046967 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.047106 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.051569 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-tpxh4" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.051700 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.051862 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.052009 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.115749 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-n4hbg"] Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.143981 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a3c5f4dc-185e-4293-9853-f16cde7997fa-lock\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.144024 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a3c5f4dc-185e-4293-9853-f16cde7997fa-cache\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.144066 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7prcp\" (UniqueName: \"kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-kube-api-access-7prcp\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.144421 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.144468 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-etc-swift\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.144623 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c5f4dc-185e-4293-9853-f16cde7997fa-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " 
pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.247135 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.247204 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-etc-swift\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.247269 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c5f4dc-185e-4293-9853-f16cde7997fa-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.247333 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a3c5f4dc-185e-4293-9853-f16cde7997fa-lock\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.247356 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a3c5f4dc-185e-4293-9853-f16cde7997fa-cache\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.247385 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7prcp\" (UniqueName: \"kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-kube-api-access-7prcp\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.248043 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: E0128 15:20:57.253898 4981 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 15:20:57 crc kubenswrapper[4981]: E0128 15:20:57.253933 4981 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 15:20:57 crc kubenswrapper[4981]: E0128 15:20:57.253997 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-etc-swift podName:a3c5f4dc-185e-4293-9853-f16cde7997fa nodeName:}" failed. No retries permitted until 2026-01-28 15:20:57.753979164 +0000 UTC m=+1069.206137395 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-etc-swift") pod "swift-storage-0" (UID: "a3c5f4dc-185e-4293-9853-f16cde7997fa") : configmap "swift-ring-files" not found Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.254588 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/a3c5f4dc-185e-4293-9853-f16cde7997fa-lock\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.254837 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a3c5f4dc-185e-4293-9853-f16cde7997fa-cache\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.262439 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3c5f4dc-185e-4293-9853-f16cde7997fa-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.267402 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7prcp\" (UniqueName: \"kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-kube-api-access-7prcp\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.276206 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.347988 4981 generic.go:334] "Generic (PLEG): container finished" podID="aaf45d34-4e3d-4397-93d1-3627de06bfe0" containerID="8893cd2c1183e30ac6a54d3facad4ea74c99171421fc5a9e55a024e2608853f5" exitCode=0 Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.348043 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" event={"ID":"aaf45d34-4e3d-4397-93d1-3627de06bfe0","Type":"ContainerDied","Data":"8893cd2c1183e30ac6a54d3facad4ea74c99171421fc5a9e55a024e2608853f5"} Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.348051 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.348066 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8jgqq" event={"ID":"aaf45d34-4e3d-4397-93d1-3627de06bfe0","Type":"ContainerDied","Data":"b21a8ed29a81abf154f63d7341e095ec78da1c6b70ee4ae4a4910497c3fb47a3"} Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.348082 4981 scope.go:117] "RemoveContainer" containerID="8893cd2c1183e30ac6a54d3facad4ea74c99171421fc5a9e55a024e2608853f5" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.361573 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerStarted","Data":"af8ca17674da28747e2478538c5afcfef139a1d418b13a8e190cf49cebcd62c0"} Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.370587 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-dv8cb" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.371017 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-dv8cb" event={"ID":"9a714c8e-c2b1-4498-902b-7726da6457e7","Type":"ContainerDied","Data":"bfe5cd5de2dc6c29c9fb7f4b479f9251fb251e970fbe0fe9a874852c22070abe"} Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.425844 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8jgqq"] Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.438240 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8jgqq"] Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.456648 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-dv8cb"] Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.482729 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-dv8cb"] Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.586689 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-zs4xh"] Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.587612 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.596533 4981 scope.go:117] "RemoveContainer" containerID="8893cd2c1183e30ac6a54d3facad4ea74c99171421fc5a9e55a024e2608853f5" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.596651 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.596677 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.597218 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 28 15:20:57 crc kubenswrapper[4981]: E0128 15:20:57.597342 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8893cd2c1183e30ac6a54d3facad4ea74c99171421fc5a9e55a024e2608853f5\": container with ID starting with 8893cd2c1183e30ac6a54d3facad4ea74c99171421fc5a9e55a024e2608853f5 not found: ID does not exist" containerID="8893cd2c1183e30ac6a54d3facad4ea74c99171421fc5a9e55a024e2608853f5" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.597375 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8893cd2c1183e30ac6a54d3facad4ea74c99171421fc5a9e55a024e2608853f5"} err="failed to get container status \"8893cd2c1183e30ac6a54d3facad4ea74c99171421fc5a9e55a024e2608853f5\": rpc error: code = NotFound desc = could not find container \"8893cd2c1183e30ac6a54d3facad4ea74c99171421fc5a9e55a024e2608853f5\": container with ID starting with 8893cd2c1183e30ac6a54d3facad4ea74c99171421fc5a9e55a024e2608853f5 not found: ID does not exist" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.597403 4981 scope.go:117] "RemoveContainer" containerID="1cb9fab1e1bb04c3fdb14b42fe38eb931449919257290774867493a532896f37" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.605011 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zs4xh"] Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.657587 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a59295bc-49fa-4b41-b2a1-3c19c27292e5-combined-ca-bundle\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.657658 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a59295bc-49fa-4b41-b2a1-3c19c27292e5-ring-data-devices\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.657702 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phqf8\" (UniqueName: \"kubernetes.io/projected/a59295bc-49fa-4b41-b2a1-3c19c27292e5-kube-api-access-phqf8\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.657980 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/a59295bc-49fa-4b41-b2a1-3c19c27292e5-scripts\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.658103 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a59295bc-49fa-4b41-b2a1-3c19c27292e5-etc-swift\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.658152 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a59295bc-49fa-4b41-b2a1-3c19c27292e5-dispersionconf\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.658223 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a59295bc-49fa-4b41-b2a1-3c19c27292e5-swiftconf\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.761596 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a59295bc-49fa-4b41-b2a1-3c19c27292e5-combined-ca-bundle\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.761650 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a59295bc-49fa-4b41-b2a1-3c19c27292e5-ring-data-devices\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.761673 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phqf8\" (UniqueName: \"kubernetes.io/projected/a59295bc-49fa-4b41-b2a1-3c19c27292e5-kube-api-access-phqf8\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.761718 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-etc-swift\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.761750 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a59295bc-49fa-4b41-b2a1-3c19c27292e5-scripts\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.761784 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/a59295bc-49fa-4b41-b2a1-3c19c27292e5-etc-swift\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.761804 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a59295bc-49fa-4b41-b2a1-3c19c27292e5-dispersionconf\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.761826 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a59295bc-49fa-4b41-b2a1-3c19c27292e5-swiftconf\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: E0128 15:20:57.762127 4981 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 15:20:57 crc kubenswrapper[4981]: E0128 15:20:57.762155 4981 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 15:20:57 crc kubenswrapper[4981]: E0128 15:20:57.762221 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-etc-swift podName:a3c5f4dc-185e-4293-9853-f16cde7997fa nodeName:}" failed. No retries permitted until 2026-01-28 15:20:58.762192211 +0000 UTC m=+1070.214350452 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-etc-swift") pod "swift-storage-0" (UID: "a3c5f4dc-185e-4293-9853-f16cde7997fa") : configmap "swift-ring-files" not found Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.762435 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a59295bc-49fa-4b41-b2a1-3c19c27292e5-etc-swift\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.763160 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a59295bc-49fa-4b41-b2a1-3c19c27292e5-scripts\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.763670 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a59295bc-49fa-4b41-b2a1-3c19c27292e5-ring-data-devices\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.808648 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a59295bc-49fa-4b41-b2a1-3c19c27292e5-swiftconf\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.808715 4981 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a59295bc-49fa-4b41-b2a1-3c19c27292e5-dispersionconf\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.811315 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a59295bc-49fa-4b41-b2a1-3c19c27292e5-combined-ca-bundle\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.812885 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phqf8\" (UniqueName: \"kubernetes.io/projected/a59295bc-49fa-4b41-b2a1-3c19c27292e5-kube-api-access-phqf8\") pod \"swift-ring-rebalance-zs4xh\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:57 crc kubenswrapper[4981]: I0128 15:20:57.945769 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:20:58 crc kubenswrapper[4981]: I0128 15:20:58.411875 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"71b20415-1b79-4236-89db-42f2787cc2c2","Type":"ContainerStarted","Data":"9d21cbd33faa2d6969de4ec878ed92832020e34313670fc670286dd8a302a147"} Jan 28 15:20:58 crc kubenswrapper[4981]: I0128 15:20:58.412199 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 28 15:20:58 crc kubenswrapper[4981]: I0128 15:20:58.412213 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"71b20415-1b79-4236-89db-42f2787cc2c2","Type":"ContainerStarted","Data":"eb1810feb527f7716fa02206100ba439e3f4a45323d20bd3cbcbdc2dcf9709ba"} Jan 28 15:20:58 crc kubenswrapper[4981]: I0128 15:20:58.417158 4981 generic.go:334] "Generic (PLEG): container finished" podID="c4d13dbf-51e4-47d3-8fa9-2abb69b3b270" containerID="f108debd7f7c6960822eb34ed1519740ad8e96f93a15145c017b97b0537ab391" exitCode=0 Jan 28 15:20:58 crc kubenswrapper[4981]: I0128 15:20:58.417229 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-n4hbg" event={"ID":"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270","Type":"ContainerDied","Data":"f108debd7f7c6960822eb34ed1519740ad8e96f93a15145c017b97b0537ab391"} Jan 28 15:20:58 crc kubenswrapper[4981]: I0128 15:20:58.417264 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-n4hbg" event={"ID":"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270","Type":"ContainerStarted","Data":"5dcf39f8968ed448de0715dd2e9da530a4c782fb4bd340735d8e87f1c4a499bd"} Jan 28 15:20:58 crc kubenswrapper[4981]: I0128 15:20:58.430811 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.147555416 podStartE2EDuration="8.430795208s" podCreationTimestamp="2026-01-28 15:20:50 +0000 UTC" firstStartedPulling="2026-01-28 15:20:51.377771822 +0000 UTC m=+1062.829930063" lastFinishedPulling="2026-01-28 15:20:57.661011614 +0000 UTC m=+1069.113169855" observedRunningTime="2026-01-28 15:20:58.43050578 +0000 UTC m=+1069.882664021" watchObservedRunningTime="2026-01-28 15:20:58.430795208 +0000 UTC m=+1069.882953449" Jan 28 15:20:58 crc kubenswrapper[4981]: I0128 15:20:58.693378 4981 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-zs4xh"] Jan 28 15:20:58 crc kubenswrapper[4981]: W0128 15:20:58.696686 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda59295bc_49fa_4b41_b2a1_3c19c27292e5.slice/crio-8f927628342d0635c8e85affa0e7885ef1ef90c838ec8d032244ec7bc6e4fd7b WatchSource:0}: Error finding container 8f927628342d0635c8e85affa0e7885ef1ef90c838ec8d032244ec7bc6e4fd7b: Status 404 returned error can't find the container with id 8f927628342d0635c8e85affa0e7885ef1ef90c838ec8d032244ec7bc6e4fd7b Jan 28 15:20:58 crc kubenswrapper[4981]: I0128 15:20:58.789216 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-etc-swift\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:20:58 crc kubenswrapper[4981]: E0128 15:20:58.789537 4981 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 15:20:58 crc kubenswrapper[4981]: E0128 15:20:58.789572 4981 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 15:20:58 crc kubenswrapper[4981]: E0128 15:20:58.789632 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-etc-swift podName:a3c5f4dc-185e-4293-9853-f16cde7997fa nodeName:}" failed. No retries permitted until 2026-01-28 15:21:00.789614098 +0000 UTC m=+1072.241772329 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-etc-swift") pod "swift-storage-0" (UID: "a3c5f4dc-185e-4293-9853-f16cde7997fa") : configmap "swift-ring-files" not found Jan 28 15:20:59 crc kubenswrapper[4981]: I0128 15:20:59.344862 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a714c8e-c2b1-4498-902b-7726da6457e7" path="/var/lib/kubelet/pods/9a714c8e-c2b1-4498-902b-7726da6457e7/volumes" Jan 28 15:20:59 crc kubenswrapper[4981]: I0128 15:20:59.346043 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaf45d34-4e3d-4397-93d1-3627de06bfe0" path="/var/lib/kubelet/pods/aaf45d34-4e3d-4397-93d1-3627de06bfe0/volumes" Jan 28 15:20:59 crc kubenswrapper[4981]: I0128 15:20:59.425371 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zs4xh" event={"ID":"a59295bc-49fa-4b41-b2a1-3c19c27292e5","Type":"ContainerStarted","Data":"8f927628342d0635c8e85affa0e7885ef1ef90c838ec8d032244ec7bc6e4fd7b"} Jan 28 15:20:59 crc kubenswrapper[4981]: I0128 15:20:59.427457 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-n4hbg" event={"ID":"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270","Type":"ContainerStarted","Data":"1a9d68e47df47aca977d37e16e7a8384eebe4debcf6ef16547b4e97cbfb54064"} Jan 28 15:20:59 crc kubenswrapper[4981]: I0128 15:20:59.427803 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:20:59 crc kubenswrapper[4981]: I0128 15:20:59.450470 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-n4hbg" podStartSLOduration=4.4504449489999995 
podStartE2EDuration="4.450444949s" podCreationTimestamp="2026-01-28 15:20:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:20:59.44559361 +0000 UTC m=+1070.897751871" watchObservedRunningTime="2026-01-28 15:20:59.450444949 +0000 UTC m=+1070.902603210" Jan 28 15:20:59 crc kubenswrapper[4981]: I0128 15:20:59.537620 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 28 15:20:59 crc kubenswrapper[4981]: I0128 15:20:59.625496 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 28 15:21:00 crc kubenswrapper[4981]: I0128 15:21:00.829585 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-etc-swift\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:21:00 crc kubenswrapper[4981]: E0128 15:21:00.829845 4981 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 15:21:00 crc kubenswrapper[4981]: E0128 15:21:00.830013 4981 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 15:21:00 crc kubenswrapper[4981]: E0128 15:21:00.830070 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-etc-swift podName:a3c5f4dc-185e-4293-9853-f16cde7997fa nodeName:}" failed. No retries permitted until 2026-01-28 15:21:04.830052949 +0000 UTC m=+1076.282211190 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-etc-swift") pod "swift-storage-0" (UID: "a3c5f4dc-185e-4293-9853-f16cde7997fa") : configmap "swift-ring-files" not found Jan 28 15:21:02 crc kubenswrapper[4981]: I0128 15:21:02.287057 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-d22jd"] Jan 28 15:21:02 crc kubenswrapper[4981]: I0128 15:21:02.288515 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d22jd" Jan 28 15:21:02 crc kubenswrapper[4981]: I0128 15:21:02.290854 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 28 15:21:02 crc kubenswrapper[4981]: I0128 15:21:02.301352 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d22jd"] Jan 28 15:21:02 crc kubenswrapper[4981]: I0128 15:21:02.334375 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 28 15:21:02 crc kubenswrapper[4981]: I0128 15:21:02.366573 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7971af1-53aa-4b9f-a6a0-179dc46d6519-operator-scripts\") pod \"root-account-create-update-d22jd\" (UID: \"d7971af1-53aa-4b9f-a6a0-179dc46d6519\") " pod="openstack/root-account-create-update-d22jd" Jan 28 15:21:02 crc kubenswrapper[4981]: I0128 15:21:02.366642 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2fsk\" (UniqueName: \"kubernetes.io/projected/d7971af1-53aa-4b9f-a6a0-179dc46d6519-kube-api-access-l2fsk\") pod \"root-account-create-update-d22jd\" (UID: \"d7971af1-53aa-4b9f-a6a0-179dc46d6519\") " pod="openstack/root-account-create-update-d22jd" Jan 28 15:21:02 crc kubenswrapper[4981]: I0128 15:21:02.429553 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 28 15:21:02 crc kubenswrapper[4981]: I0128 15:21:02.451636 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zs4xh" event={"ID":"a59295bc-49fa-4b41-b2a1-3c19c27292e5","Type":"ContainerStarted","Data":"d946ab64400dc81e4f7bc6a25b3466f28778baa5e6d6a4b604cf92ba140fe171"} Jan 28 15:21:02 crc kubenswrapper[4981]: I0128 15:21:02.468536 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7971af1-53aa-4b9f-a6a0-179dc46d6519-operator-scripts\") pod \"root-account-create-update-d22jd\" (UID: \"d7971af1-53aa-4b9f-a6a0-179dc46d6519\") " pod="openstack/root-account-create-update-d22jd" Jan 28 15:21:02 crc kubenswrapper[4981]: I0128 15:21:02.468875 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2fsk\" (UniqueName: \"kubernetes.io/projected/d7971af1-53aa-4b9f-a6a0-179dc46d6519-kube-api-access-l2fsk\") pod \"root-account-create-update-d22jd\" (UID: \"d7971af1-53aa-4b9f-a6a0-179dc46d6519\") " pod="openstack/root-account-create-update-d22jd" Jan 28 15:21:02 crc kubenswrapper[4981]: I0128 15:21:02.469643 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7971af1-53aa-4b9f-a6a0-179dc46d6519-operator-scripts\") pod \"root-account-create-update-d22jd\" (UID: \"d7971af1-53aa-4b9f-a6a0-179dc46d6519\") " pod="openstack/root-account-create-update-d22jd" Jan 28 15:21:02 crc kubenswrapper[4981]: I0128 15:21:02.475919 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-zs4xh" podStartSLOduration=2.366902671 podStartE2EDuration="5.47590468s" podCreationTimestamp="2026-01-28 15:20:57 +0000 UTC" firstStartedPulling="2026-01-28 15:20:58.698978101 +0000 UTC m=+1070.151136342" lastFinishedPulling="2026-01-28 
15:21:01.80798007 +0000 UTC m=+1073.260138351" observedRunningTime="2026-01-28 15:21:02.469116639 +0000 UTC m=+1073.921274870" watchObservedRunningTime="2026-01-28 15:21:02.47590468 +0000 UTC m=+1073.928062941" Jan 28 15:21:02 crc kubenswrapper[4981]: I0128 15:21:02.493900 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2fsk\" (UniqueName: \"kubernetes.io/projected/d7971af1-53aa-4b9f-a6a0-179dc46d6519-kube-api-access-l2fsk\") pod \"root-account-create-update-d22jd\" (UID: \"d7971af1-53aa-4b9f-a6a0-179dc46d6519\") " pod="openstack/root-account-create-update-d22jd" Jan 28 15:21:02 crc kubenswrapper[4981]: I0128 15:21:02.603528 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d22jd" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.041993 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-d22jd"] Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.293319 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-r2xkn"] Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.294588 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-r2xkn" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.311183 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-r2xkn"] Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.390322 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p44fh\" (UniqueName: \"kubernetes.io/projected/8912af28-37cb-4f57-b318-9e3724b13213-kube-api-access-p44fh\") pod \"keystone-db-create-r2xkn\" (UID: \"8912af28-37cb-4f57-b318-9e3724b13213\") " pod="openstack/keystone-db-create-r2xkn" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.390401 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8912af28-37cb-4f57-b318-9e3724b13213-operator-scripts\") pod \"keystone-db-create-r2xkn\" (UID: \"8912af28-37cb-4f57-b318-9e3724b13213\") " pod="openstack/keystone-db-create-r2xkn" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.394959 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c8be-account-create-update-8gx8g"] Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.395952 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c8be-account-create-update-8gx8g" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.398724 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.412858 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c8be-account-create-update-8gx8g"] Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.459699 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d22jd" event={"ID":"d7971af1-53aa-4b9f-a6a0-179dc46d6519","Type":"ContainerStarted","Data":"6878b04f7d9757c06a27119a4d86587d52d67b084b4c2e7b7f9a4332f1fd6c91"} Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.459754 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d22jd" event={"ID":"d7971af1-53aa-4b9f-a6a0-179dc46d6519","Type":"ContainerStarted","Data":"f275f8b959dcdf3c09f8a223c0fa622423372ed6e226a17daedb68508336855d"} Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.482819 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-d22jd" podStartSLOduration=1.482788951 podStartE2EDuration="1.482788951s" podCreationTimestamp="2026-01-28 15:21:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:03.475552269 +0000 UTC m=+1074.927710550" watchObservedRunningTime="2026-01-28 15:21:03.482788951 +0000 UTC m=+1074.934947242" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.492419 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p44fh\" (UniqueName: \"kubernetes.io/projected/8912af28-37cb-4f57-b318-9e3724b13213-kube-api-access-p44fh\") pod \"keystone-db-create-r2xkn\" (UID: \"8912af28-37cb-4f57-b318-9e3724b13213\") " pod="openstack/keystone-db-create-r2xkn" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.492487 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztc2k\" (UniqueName: \"kubernetes.io/projected/0329c13d-bd93-45a8-82a3-b990aa22da35-kube-api-access-ztc2k\") pod \"keystone-c8be-account-create-update-8gx8g\" (UID: \"0329c13d-bd93-45a8-82a3-b990aa22da35\") " pod="openstack/keystone-c8be-account-create-update-8gx8g" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.492511 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8912af28-37cb-4f57-b318-9e3724b13213-operator-scripts\") pod \"keystone-db-create-r2xkn\" (UID: \"8912af28-37cb-4f57-b318-9e3724b13213\") " pod="openstack/keystone-db-create-r2xkn" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.492575 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0329c13d-bd93-45a8-82a3-b990aa22da35-operator-scripts\") pod \"keystone-c8be-account-create-update-8gx8g\" (UID: \"0329c13d-bd93-45a8-82a3-b990aa22da35\") " pod="openstack/keystone-c8be-account-create-update-8gx8g" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.494651 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8912af28-37cb-4f57-b318-9e3724b13213-operator-scripts\") 
pod \"keystone-db-create-r2xkn\" (UID: \"8912af28-37cb-4f57-b318-9e3724b13213\") " pod="openstack/keystone-db-create-r2xkn" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.516623 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p44fh\" (UniqueName: \"kubernetes.io/projected/8912af28-37cb-4f57-b318-9e3724b13213-kube-api-access-p44fh\") pod \"keystone-db-create-r2xkn\" (UID: \"8912af28-37cb-4f57-b318-9e3724b13213\") " pod="openstack/keystone-db-create-r2xkn" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.588609 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-vxd4q"] Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.589779 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vxd4q" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.593902 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztc2k\" (UniqueName: \"kubernetes.io/projected/0329c13d-bd93-45a8-82a3-b990aa22da35-kube-api-access-ztc2k\") pod \"keystone-c8be-account-create-update-8gx8g\" (UID: \"0329c13d-bd93-45a8-82a3-b990aa22da35\") " pod="openstack/keystone-c8be-account-create-update-8gx8g" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.594026 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0329c13d-bd93-45a8-82a3-b990aa22da35-operator-scripts\") pod \"keystone-c8be-account-create-update-8gx8g\" (UID: \"0329c13d-bd93-45a8-82a3-b990aa22da35\") " pod="openstack/keystone-c8be-account-create-update-8gx8g" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.594891 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0329c13d-bd93-45a8-82a3-b990aa22da35-operator-scripts\") pod \"keystone-c8be-account-create-update-8gx8g\" (UID: \"0329c13d-bd93-45a8-82a3-b990aa22da35\") " pod="openstack/keystone-c8be-account-create-update-8gx8g" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.609531 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vxd4q"] Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.610026 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-r2xkn" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.638319 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztc2k\" (UniqueName: \"kubernetes.io/projected/0329c13d-bd93-45a8-82a3-b990aa22da35-kube-api-access-ztc2k\") pod \"keystone-c8be-account-create-update-8gx8g\" (UID: \"0329c13d-bd93-45a8-82a3-b990aa22da35\") " pod="openstack/keystone-c8be-account-create-update-8gx8g" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.697194 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cqch\" (UniqueName: \"kubernetes.io/projected/c6fd85fb-5a63-4f04-8c99-c03167e5e4a9-kube-api-access-6cqch\") pod \"placement-db-create-vxd4q\" (UID: \"c6fd85fb-5a63-4f04-8c99-c03167e5e4a9\") " pod="openstack/placement-db-create-vxd4q" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.697302 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6fd85fb-5a63-4f04-8c99-c03167e5e4a9-operator-scripts\") pod \"placement-db-create-vxd4q\" (UID: \"c6fd85fb-5a63-4f04-8c99-c03167e5e4a9\") " pod="openstack/placement-db-create-vxd4q" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.703415 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-893e-account-create-update-tjqh9"] Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.704769 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-893e-account-create-update-tjqh9" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.707692 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.709823 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c8be-account-create-update-8gx8g" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.728431 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-893e-account-create-update-tjqh9"] Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.798400 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cqch\" (UniqueName: \"kubernetes.io/projected/c6fd85fb-5a63-4f04-8c99-c03167e5e4a9-kube-api-access-6cqch\") pod \"placement-db-create-vxd4q\" (UID: \"c6fd85fb-5a63-4f04-8c99-c03167e5e4a9\") " pod="openstack/placement-db-create-vxd4q" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.798762 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6fd85fb-5a63-4f04-8c99-c03167e5e4a9-operator-scripts\") pod \"placement-db-create-vxd4q\" (UID: \"c6fd85fb-5a63-4f04-8c99-c03167e5e4a9\") " pod="openstack/placement-db-create-vxd4q" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.799743 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6fd85fb-5a63-4f04-8c99-c03167e5e4a9-operator-scripts\") pod \"placement-db-create-vxd4q\" (UID: \"c6fd85fb-5a63-4f04-8c99-c03167e5e4a9\") " pod="openstack/placement-db-create-vxd4q" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.820015 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cqch\" (UniqueName: \"kubernetes.io/projected/c6fd85fb-5a63-4f04-8c99-c03167e5e4a9-kube-api-access-6cqch\") pod \"placement-db-create-vxd4q\" (UID: \"c6fd85fb-5a63-4f04-8c99-c03167e5e4a9\") " pod="openstack/placement-db-create-vxd4q" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.899946 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c557k\" (UniqueName: \"kubernetes.io/projected/cea78484-cb73-4b6c-bf1f-36e44fcb7cf4-kube-api-access-c557k\") pod \"placement-893e-account-create-update-tjqh9\" (UID: \"cea78484-cb73-4b6c-bf1f-36e44fcb7cf4\") " pod="openstack/placement-893e-account-create-update-tjqh9" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.900009 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea78484-cb73-4b6c-bf1f-36e44fcb7cf4-operator-scripts\") pod \"placement-893e-account-create-update-tjqh9\" (UID: \"cea78484-cb73-4b6c-bf1f-36e44fcb7cf4\") " pod="openstack/placement-893e-account-create-update-tjqh9" Jan 28 15:21:03 crc kubenswrapper[4981]: I0128 15:21:03.922175 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-vxd4q" Jan 28 15:21:04 crc kubenswrapper[4981]: I0128 15:21:04.004091 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea78484-cb73-4b6c-bf1f-36e44fcb7cf4-operator-scripts\") pod \"placement-893e-account-create-update-tjqh9\" (UID: \"cea78484-cb73-4b6c-bf1f-36e44fcb7cf4\") " pod="openstack/placement-893e-account-create-update-tjqh9" Jan 28 15:21:04 crc kubenswrapper[4981]: I0128 15:21:04.004563 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c557k\" (UniqueName: \"kubernetes.io/projected/cea78484-cb73-4b6c-bf1f-36e44fcb7cf4-kube-api-access-c557k\") pod \"placement-893e-account-create-update-tjqh9\" (UID: \"cea78484-cb73-4b6c-bf1f-36e44fcb7cf4\") " pod="openstack/placement-893e-account-create-update-tjqh9" Jan 28 15:21:04 crc kubenswrapper[4981]: I0128 15:21:04.005373 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea78484-cb73-4b6c-bf1f-36e44fcb7cf4-operator-scripts\") pod \"placement-893e-account-create-update-tjqh9\" (UID: \"cea78484-cb73-4b6c-bf1f-36e44fcb7cf4\") " pod="openstack/placement-893e-account-create-update-tjqh9" Jan 28 15:21:04 crc kubenswrapper[4981]: I0128 15:21:04.030783 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c557k\" (UniqueName: \"kubernetes.io/projected/cea78484-cb73-4b6c-bf1f-36e44fcb7cf4-kube-api-access-c557k\") pod \"placement-893e-account-create-update-tjqh9\" (UID: \"cea78484-cb73-4b6c-bf1f-36e44fcb7cf4\") " pod="openstack/placement-893e-account-create-update-tjqh9" Jan 28 15:21:04 crc kubenswrapper[4981]: I0128 15:21:04.081968 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-r2xkn"] Jan 28 15:21:04 crc kubenswrapper[4981]: W0128 15:21:04.090751 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8912af28_37cb_4f57_b318_9e3724b13213.slice/crio-d13a4a6d9cf0130fbf05f7a7c3770b0e7d0167bb97ae489c609a1e76b14e5ae7 WatchSource:0}: Error finding container d13a4a6d9cf0130fbf05f7a7c3770b0e7d0167bb97ae489c609a1e76b14e5ae7: Status 404 returned error can't find the container with id d13a4a6d9cf0130fbf05f7a7c3770b0e7d0167bb97ae489c609a1e76b14e5ae7 Jan 28 15:21:04 crc kubenswrapper[4981]: I0128 15:21:04.176232 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c8be-account-create-update-8gx8g"] Jan 28 15:21:04 crc kubenswrapper[4981]: I0128 15:21:04.326870 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-893e-account-create-update-tjqh9" Jan 28 15:21:04 crc kubenswrapper[4981]: I0128 15:21:04.357632 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-vxd4q"] Jan 28 15:21:04 crc kubenswrapper[4981]: W0128 15:21:04.369625 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6fd85fb_5a63_4f04_8c99_c03167e5e4a9.slice/crio-7fc52cea7d6ce0fcc9241916771c956da62079696b3a20695486f6f309c8cd48 WatchSource:0}: Error finding container 7fc52cea7d6ce0fcc9241916771c956da62079696b3a20695486f6f309c8cd48: Status 404 returned error can't find the container with id 7fc52cea7d6ce0fcc9241916771c956da62079696b3a20695486f6f309c8cd48 Jan 28 15:21:04 crc kubenswrapper[4981]: I0128 15:21:04.469610 4981 generic.go:334] "Generic (PLEG): container finished" podID="d7971af1-53aa-4b9f-a6a0-179dc46d6519" containerID="6878b04f7d9757c06a27119a4d86587d52d67b084b4c2e7b7f9a4332f1fd6c91" exitCode=0 Jan 28 15:21:04 crc kubenswrapper[4981]: I0128 15:21:04.469691 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d22jd" event={"ID":"d7971af1-53aa-4b9f-a6a0-179dc46d6519","Type":"ContainerDied","Data":"6878b04f7d9757c06a27119a4d86587d52d67b084b4c2e7b7f9a4332f1fd6c91"} Jan 28 15:21:04 crc kubenswrapper[4981]: I0128 15:21:04.473981 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-r2xkn" event={"ID":"8912af28-37cb-4f57-b318-9e3724b13213","Type":"ContainerStarted","Data":"60e1f8071aafadaa0edf6f9e60bad757dcd9ac844596ea221be2f289a300ff94"} Jan 28 15:21:04 crc kubenswrapper[4981]: I0128 15:21:04.474031 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-r2xkn" event={"ID":"8912af28-37cb-4f57-b318-9e3724b13213","Type":"ContainerStarted","Data":"d13a4a6d9cf0130fbf05f7a7c3770b0e7d0167bb97ae489c609a1e76b14e5ae7"} Jan 28 15:21:04 crc kubenswrapper[4981]: I0128 15:21:04.481167 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vxd4q" event={"ID":"c6fd85fb-5a63-4f04-8c99-c03167e5e4a9","Type":"ContainerStarted","Data":"7fc52cea7d6ce0fcc9241916771c956da62079696b3a20695486f6f309c8cd48"} Jan 28 15:21:04 crc kubenswrapper[4981]: I0128 15:21:04.486721 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c8be-account-create-update-8gx8g" event={"ID":"0329c13d-bd93-45a8-82a3-b990aa22da35","Type":"ContainerStarted","Data":"b5fd1c0a06956b34d63d1144a2bdfebee439cc02aba1c6c1a997689785770a8f"} Jan 28 15:21:04 crc kubenswrapper[4981]: I0128 15:21:04.486766 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c8be-account-create-update-8gx8g" event={"ID":"0329c13d-bd93-45a8-82a3-b990aa22da35","Type":"ContainerStarted","Data":"4cf5f7048cf8a3e85013d2ef746efd1cb1a0fbbbb9bab64bb9ada842cc03dd87"} Jan 28 15:21:04 crc kubenswrapper[4981]: I0128 15:21:04.517001 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-c8be-account-create-update-8gx8g" podStartSLOduration=1.516983288 podStartE2EDuration="1.516983288s" podCreationTimestamp="2026-01-28 15:21:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:04.508254326 +0000 UTC m=+1075.960412567" watchObservedRunningTime="2026-01-28 15:21:04.516983288 +0000 UTC m=+1075.969141529" Jan 28 15:21:04 crc 
kubenswrapper[4981]: I0128 15:21:04.532621 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-r2xkn" podStartSLOduration=1.532606163 podStartE2EDuration="1.532606163s" podCreationTimestamp="2026-01-28 15:21:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:04.52686891 +0000 UTC m=+1075.979027151" watchObservedRunningTime="2026-01-28 15:21:04.532606163 +0000 UTC m=+1075.984764404" Jan 28 15:21:04 crc kubenswrapper[4981]: I0128 15:21:04.803620 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-893e-account-create-update-tjqh9"] Jan 28 15:21:04 crc kubenswrapper[4981]: W0128 15:21:04.807269 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcea78484_cb73_4b6c_bf1f_36e44fcb7cf4.slice/crio-bb43877cc28fc96fbef21778202234c909c4db2bf2b922214193c1a2c70417c1 WatchSource:0}: Error finding container bb43877cc28fc96fbef21778202234c909c4db2bf2b922214193c1a2c70417c1: Status 404 returned error can't find the container with id bb43877cc28fc96fbef21778202234c909c4db2bf2b922214193c1a2c70417c1 Jan 28 15:21:04 crc kubenswrapper[4981]: I0128 15:21:04.924800 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-etc-swift\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:21:04 crc kubenswrapper[4981]: E0128 15:21:04.924919 4981 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 15:21:04 crc kubenswrapper[4981]: E0128 15:21:04.925428 4981 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 15:21:04 crc kubenswrapper[4981]: E0128 15:21:04.925524 4981 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-etc-swift podName:a3c5f4dc-185e-4293-9853-f16cde7997fa nodeName:}" failed. No retries permitted until 2026-01-28 15:21:12.925499876 +0000 UTC m=+1084.377658127 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-etc-swift") pod "swift-storage-0" (UID: "a3c5f4dc-185e-4293-9853-f16cde7997fa") : configmap "swift-ring-files" not found Jan 28 15:21:05 crc kubenswrapper[4981]: I0128 15:21:05.504341 4981 generic.go:334] "Generic (PLEG): container finished" podID="8912af28-37cb-4f57-b318-9e3724b13213" containerID="60e1f8071aafadaa0edf6f9e60bad757dcd9ac844596ea221be2f289a300ff94" exitCode=0 Jan 28 15:21:05 crc kubenswrapper[4981]: I0128 15:21:05.504460 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-r2xkn" event={"ID":"8912af28-37cb-4f57-b318-9e3724b13213","Type":"ContainerDied","Data":"60e1f8071aafadaa0edf6f9e60bad757dcd9ac844596ea221be2f289a300ff94"} Jan 28 15:21:05 crc kubenswrapper[4981]: I0128 15:21:05.508149 4981 generic.go:334] "Generic (PLEG): container finished" podID="c6fd85fb-5a63-4f04-8c99-c03167e5e4a9" containerID="675f7f4a4d21f1f0b9da7a0d4057ea65b90881e3c1c41cf359ddfee8480ae1c1" exitCode=0 Jan 28 15:21:05 crc kubenswrapper[4981]: I0128 15:21:05.508296 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vxd4q" event={"ID":"c6fd85fb-5a63-4f04-8c99-c03167e5e4a9","Type":"ContainerDied","Data":"675f7f4a4d21f1f0b9da7a0d4057ea65b90881e3c1c41cf359ddfee8480ae1c1"} Jan 28 15:21:05 crc kubenswrapper[4981]: I0128 15:21:05.514070 4981 generic.go:334] "Generic (PLEG): container finished" podID="0329c13d-bd93-45a8-82a3-b990aa22da35" containerID="b5fd1c0a06956b34d63d1144a2bdfebee439cc02aba1c6c1a997689785770a8f" exitCode=0 Jan 28 15:21:05 crc kubenswrapper[4981]: I0128 15:21:05.514127 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c8be-account-create-update-8gx8g" event={"ID":"0329c13d-bd93-45a8-82a3-b990aa22da35","Type":"ContainerDied","Data":"b5fd1c0a06956b34d63d1144a2bdfebee439cc02aba1c6c1a997689785770a8f"} Jan 28 15:21:05 crc kubenswrapper[4981]: I0128 15:21:05.527696 4981 generic.go:334] "Generic (PLEG): container finished" podID="cea78484-cb73-4b6c-bf1f-36e44fcb7cf4" containerID="5f6228c1dede5551735def8d5ecedadaab7001c13664fe6dc26cbf12d0dc2b14" exitCode=0 Jan 28 15:21:05 crc kubenswrapper[4981]: I0128 15:21:05.527752 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-893e-account-create-update-tjqh9" event={"ID":"cea78484-cb73-4b6c-bf1f-36e44fcb7cf4","Type":"ContainerDied","Data":"5f6228c1dede5551735def8d5ecedadaab7001c13664fe6dc26cbf12d0dc2b14"} Jan 28 15:21:05 crc kubenswrapper[4981]: I0128 15:21:05.527796 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-893e-account-create-update-tjqh9" event={"ID":"cea78484-cb73-4b6c-bf1f-36e44fcb7cf4","Type":"ContainerStarted","Data":"bb43877cc28fc96fbef21778202234c909c4db2bf2b922214193c1a2c70417c1"} Jan 28 15:21:05 crc kubenswrapper[4981]: I0128 15:21:05.860526 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-d22jd" Jan 28 15:21:06 crc kubenswrapper[4981]: I0128 15:21:06.048703 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7971af1-53aa-4b9f-a6a0-179dc46d6519-operator-scripts\") pod \"d7971af1-53aa-4b9f-a6a0-179dc46d6519\" (UID: \"d7971af1-53aa-4b9f-a6a0-179dc46d6519\") " Jan 28 15:21:06 crc kubenswrapper[4981]: I0128 15:21:06.049086 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2fsk\" (UniqueName: \"kubernetes.io/projected/d7971af1-53aa-4b9f-a6a0-179dc46d6519-kube-api-access-l2fsk\") pod \"d7971af1-53aa-4b9f-a6a0-179dc46d6519\" (UID: \"d7971af1-53aa-4b9f-a6a0-179dc46d6519\") " Jan 28 15:21:06 crc kubenswrapper[4981]: I0128 15:21:06.049824 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7971af1-53aa-4b9f-a6a0-179dc46d6519-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7971af1-53aa-4b9f-a6a0-179dc46d6519" (UID: "d7971af1-53aa-4b9f-a6a0-179dc46d6519"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:06 crc kubenswrapper[4981]: I0128 15:21:06.057643 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7971af1-53aa-4b9f-a6a0-179dc46d6519-kube-api-access-l2fsk" (OuterVolumeSpecName: "kube-api-access-l2fsk") pod "d7971af1-53aa-4b9f-a6a0-179dc46d6519" (UID: "d7971af1-53aa-4b9f-a6a0-179dc46d6519"). InnerVolumeSpecName "kube-api-access-l2fsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:06 crc kubenswrapper[4981]: I0128 15:21:06.151891 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7971af1-53aa-4b9f-a6a0-179dc46d6519-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:06 crc kubenswrapper[4981]: I0128 15:21:06.151946 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2fsk\" (UniqueName: \"kubernetes.io/projected/d7971af1-53aa-4b9f-a6a0-179dc46d6519-kube-api-access-l2fsk\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:06 crc kubenswrapper[4981]: I0128 15:21:06.417541 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:21:06 crc kubenswrapper[4981]: I0128 15:21:06.524384 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fwnkd"] Jan 28 15:21:06 crc kubenswrapper[4981]: I0128 15:21:06.524616 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd" podUID="c8c96d9d-6274-4310-a9c3-855a38413dda" containerName="dnsmasq-dns" containerID="cri-o://f38c6bafe550f1e1034f532ea5bafd24b50fc368a5d0bec5780144b00e34310f" gracePeriod=10 Jan 28 15:21:06 crc kubenswrapper[4981]: I0128 15:21:06.543749 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-d22jd" event={"ID":"d7971af1-53aa-4b9f-a6a0-179dc46d6519","Type":"ContainerDied","Data":"f275f8b959dcdf3c09f8a223c0fa622423372ed6e226a17daedb68508336855d"} Jan 28 15:21:06 crc kubenswrapper[4981]: I0128 15:21:06.543787 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f275f8b959dcdf3c09f8a223c0fa622423372ed6e226a17daedb68508336855d" Jan 28 15:21:06 crc kubenswrapper[4981]: I0128 
15:21:06.543921 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-d22jd" Jan 28 15:21:06 crc kubenswrapper[4981]: I0128 15:21:06.864648 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-r2xkn" Jan 28 15:21:06 crc kubenswrapper[4981]: I0128 15:21:06.967466 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8912af28-37cb-4f57-b318-9e3724b13213-operator-scripts\") pod \"8912af28-37cb-4f57-b318-9e3724b13213\" (UID: \"8912af28-37cb-4f57-b318-9e3724b13213\") " Jan 28 15:21:06 crc kubenswrapper[4981]: I0128 15:21:06.967598 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p44fh\" (UniqueName: \"kubernetes.io/projected/8912af28-37cb-4f57-b318-9e3724b13213-kube-api-access-p44fh\") pod \"8912af28-37cb-4f57-b318-9e3724b13213\" (UID: \"8912af28-37cb-4f57-b318-9e3724b13213\") " Jan 28 15:21:06 crc kubenswrapper[4981]: I0128 15:21:06.969411 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8912af28-37cb-4f57-b318-9e3724b13213-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8912af28-37cb-4f57-b318-9e3724b13213" (UID: "8912af28-37cb-4f57-b318-9e3724b13213"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:06 crc kubenswrapper[4981]: I0128 15:21:06.990638 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8912af28-37cb-4f57-b318-9e3724b13213-kube-api-access-p44fh" (OuterVolumeSpecName: "kube-api-access-p44fh") pod "8912af28-37cb-4f57-b318-9e3724b13213" (UID: "8912af28-37cb-4f57-b318-9e3724b13213"). InnerVolumeSpecName "kube-api-access-p44fh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.069372 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p44fh\" (UniqueName: \"kubernetes.io/projected/8912af28-37cb-4f57-b318-9e3724b13213-kube-api-access-p44fh\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.069398 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8912af28-37cb-4f57-b318-9e3724b13213-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.247211 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.258018 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-vxd4q" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.259737 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-893e-account-create-update-tjqh9" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.265886 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c8be-account-create-update-8gx8g" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.373484 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvs97\" (UniqueName: \"kubernetes.io/projected/c8c96d9d-6274-4310-a9c3-855a38413dda-kube-api-access-nvs97\") pod \"c8c96d9d-6274-4310-a9c3-855a38413dda\" (UID: \"c8c96d9d-6274-4310-a9c3-855a38413dda\") " Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.373559 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea78484-cb73-4b6c-bf1f-36e44fcb7cf4-operator-scripts\") pod \"cea78484-cb73-4b6c-bf1f-36e44fcb7cf4\" (UID: \"cea78484-cb73-4b6c-bf1f-36e44fcb7cf4\") " Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.373583 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c557k\" (UniqueName: \"kubernetes.io/projected/cea78484-cb73-4b6c-bf1f-36e44fcb7cf4-kube-api-access-c557k\") pod \"cea78484-cb73-4b6c-bf1f-36e44fcb7cf4\" (UID: \"cea78484-cb73-4b6c-bf1f-36e44fcb7cf4\") " Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.373631 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8c96d9d-6274-4310-a9c3-855a38413dda-dns-svc\") pod \"c8c96d9d-6274-4310-a9c3-855a38413dda\" (UID: \"c8c96d9d-6274-4310-a9c3-855a38413dda\") " Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.373656 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0329c13d-bd93-45a8-82a3-b990aa22da35-operator-scripts\") pod \"0329c13d-bd93-45a8-82a3-b990aa22da35\" (UID: \"0329c13d-bd93-45a8-82a3-b990aa22da35\") " Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.373679 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztc2k\" (UniqueName: \"kubernetes.io/projected/0329c13d-bd93-45a8-82a3-b990aa22da35-kube-api-access-ztc2k\") pod \"0329c13d-bd93-45a8-82a3-b990aa22da35\" (UID: \"0329c13d-bd93-45a8-82a3-b990aa22da35\") " Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.373738 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6fd85fb-5a63-4f04-8c99-c03167e5e4a9-operator-scripts\") pod \"c6fd85fb-5a63-4f04-8c99-c03167e5e4a9\" (UID: \"c6fd85fb-5a63-4f04-8c99-c03167e5e4a9\") " Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.373791 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cqch\" (UniqueName: \"kubernetes.io/projected/c6fd85fb-5a63-4f04-8c99-c03167e5e4a9-kube-api-access-6cqch\") pod \"c6fd85fb-5a63-4f04-8c99-c03167e5e4a9\" (UID: \"c6fd85fb-5a63-4f04-8c99-c03167e5e4a9\") " Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.373812 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c96d9d-6274-4310-a9c3-855a38413dda-config\") pod \"c8c96d9d-6274-4310-a9c3-855a38413dda\" (UID: \"c8c96d9d-6274-4310-a9c3-855a38413dda\") " Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.374383 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0329c13d-bd93-45a8-82a3-b990aa22da35-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "0329c13d-bd93-45a8-82a3-b990aa22da35" (UID: "0329c13d-bd93-45a8-82a3-b990aa22da35"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.374591 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6fd85fb-5a63-4f04-8c99-c03167e5e4a9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6fd85fb-5a63-4f04-8c99-c03167e5e4a9" (UID: "c6fd85fb-5a63-4f04-8c99-c03167e5e4a9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.374691 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cea78484-cb73-4b6c-bf1f-36e44fcb7cf4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cea78484-cb73-4b6c-bf1f-36e44fcb7cf4" (UID: "cea78484-cb73-4b6c-bf1f-36e44fcb7cf4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.377260 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cea78484-cb73-4b6c-bf1f-36e44fcb7cf4-kube-api-access-c557k" (OuterVolumeSpecName: "kube-api-access-c557k") pod "cea78484-cb73-4b6c-bf1f-36e44fcb7cf4" (UID: "cea78484-cb73-4b6c-bf1f-36e44fcb7cf4"). InnerVolumeSpecName "kube-api-access-c557k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.381924 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6fd85fb-5a63-4f04-8c99-c03167e5e4a9-kube-api-access-6cqch" (OuterVolumeSpecName: "kube-api-access-6cqch") pod "c6fd85fb-5a63-4f04-8c99-c03167e5e4a9" (UID: "c6fd85fb-5a63-4f04-8c99-c03167e5e4a9"). InnerVolumeSpecName "kube-api-access-6cqch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.382038 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0329c13d-bd93-45a8-82a3-b990aa22da35-kube-api-access-ztc2k" (OuterVolumeSpecName: "kube-api-access-ztc2k") pod "0329c13d-bd93-45a8-82a3-b990aa22da35" (UID: "0329c13d-bd93-45a8-82a3-b990aa22da35"). InnerVolumeSpecName "kube-api-access-ztc2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.382243 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8c96d9d-6274-4310-a9c3-855a38413dda-kube-api-access-nvs97" (OuterVolumeSpecName: "kube-api-access-nvs97") pod "c8c96d9d-6274-4310-a9c3-855a38413dda" (UID: "c8c96d9d-6274-4310-a9c3-855a38413dda"). InnerVolumeSpecName "kube-api-access-nvs97". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.411212 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8c96d9d-6274-4310-a9c3-855a38413dda-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c8c96d9d-6274-4310-a9c3-855a38413dda" (UID: "c8c96d9d-6274-4310-a9c3-855a38413dda"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.413271 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8c96d9d-6274-4310-a9c3-855a38413dda-config" (OuterVolumeSpecName: "config") pod "c8c96d9d-6274-4310-a9c3-855a38413dda" (UID: "c8c96d9d-6274-4310-a9c3-855a38413dda"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.475458 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cea78484-cb73-4b6c-bf1f-36e44fcb7cf4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.475490 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c557k\" (UniqueName: \"kubernetes.io/projected/cea78484-cb73-4b6c-bf1f-36e44fcb7cf4-kube-api-access-c557k\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.475502 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8c96d9d-6274-4310-a9c3-855a38413dda-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.475510 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0329c13d-bd93-45a8-82a3-b990aa22da35-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.475519 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztc2k\" (UniqueName: \"kubernetes.io/projected/0329c13d-bd93-45a8-82a3-b990aa22da35-kube-api-access-ztc2k\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.475527 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6fd85fb-5a63-4f04-8c99-c03167e5e4a9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.475537 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cqch\" (UniqueName: \"kubernetes.io/projected/c6fd85fb-5a63-4f04-8c99-c03167e5e4a9-kube-api-access-6cqch\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.475546 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8c96d9d-6274-4310-a9c3-855a38413dda-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.475555 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvs97\" (UniqueName: \"kubernetes.io/projected/c8c96d9d-6274-4310-a9c3-855a38413dda-kube-api-access-nvs97\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.563546 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c8be-account-create-update-8gx8g" event={"ID":"0329c13d-bd93-45a8-82a3-b990aa22da35","Type":"ContainerDied","Data":"4cf5f7048cf8a3e85013d2ef746efd1cb1a0fbbbb9bab64bb9ada842cc03dd87"} Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.564797 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cf5f7048cf8a3e85013d2ef746efd1cb1a0fbbbb9bab64bb9ada842cc03dd87" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.563620 4981 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c8be-account-create-update-8gx8g" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.569571 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-893e-account-create-update-tjqh9" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.569514 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-893e-account-create-update-tjqh9" event={"ID":"cea78484-cb73-4b6c-bf1f-36e44fcb7cf4","Type":"ContainerDied","Data":"bb43877cc28fc96fbef21778202234c909c4db2bf2b922214193c1a2c70417c1"} Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.570686 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb43877cc28fc96fbef21778202234c909c4db2bf2b922214193c1a2c70417c1" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.572553 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-r2xkn" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.572553 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-r2xkn" event={"ID":"8912af28-37cb-4f57-b318-9e3724b13213","Type":"ContainerDied","Data":"d13a4a6d9cf0130fbf05f7a7c3770b0e7d0167bb97ae489c609a1e76b14e5ae7"} Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.572695 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d13a4a6d9cf0130fbf05f7a7c3770b0e7d0167bb97ae489c609a1e76b14e5ae7" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.578566 4981 generic.go:334] "Generic (PLEG): container finished" podID="c8c96d9d-6274-4310-a9c3-855a38413dda" containerID="f38c6bafe550f1e1034f532ea5bafd24b50fc368a5d0bec5780144b00e34310f" exitCode=0 Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.578625 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.578647 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd" event={"ID":"c8c96d9d-6274-4310-a9c3-855a38413dda","Type":"ContainerDied","Data":"f38c6bafe550f1e1034f532ea5bafd24b50fc368a5d0bec5780144b00e34310f"} Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.578683 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-fwnkd" event={"ID":"c8c96d9d-6274-4310-a9c3-855a38413dda","Type":"ContainerDied","Data":"2b87994befb49c036ea076374fc83481d9c1a6d69a877834b59f47e2008ebccd"} Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.578705 4981 scope.go:117] "RemoveContainer" containerID="f38c6bafe550f1e1034f532ea5bafd24b50fc368a5d0bec5780144b00e34310f" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.581612 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-vxd4q" event={"ID":"c6fd85fb-5a63-4f04-8c99-c03167e5e4a9","Type":"ContainerDied","Data":"7fc52cea7d6ce0fcc9241916771c956da62079696b3a20695486f6f309c8cd48"} Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.581646 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fc52cea7d6ce0fcc9241916771c956da62079696b3a20695486f6f309c8cd48" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.581706 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-vxd4q" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.597904 4981 scope.go:117] "RemoveContainer" containerID="0a745bb45950f8a380c9451b07b9868ab46671e1545f01eae1bb83defabfccea" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.626919 4981 scope.go:117] "RemoveContainer" containerID="f38c6bafe550f1e1034f532ea5bafd24b50fc368a5d0bec5780144b00e34310f" Jan 28 15:21:07 crc kubenswrapper[4981]: E0128 15:21:07.627527 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f38c6bafe550f1e1034f532ea5bafd24b50fc368a5d0bec5780144b00e34310f\": container with ID starting with f38c6bafe550f1e1034f532ea5bafd24b50fc368a5d0bec5780144b00e34310f not found: ID does not exist" containerID="f38c6bafe550f1e1034f532ea5bafd24b50fc368a5d0bec5780144b00e34310f" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.627579 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38c6bafe550f1e1034f532ea5bafd24b50fc368a5d0bec5780144b00e34310f"} err="failed to get container status \"f38c6bafe550f1e1034f532ea5bafd24b50fc368a5d0bec5780144b00e34310f\": rpc error: code = NotFound desc = could not find container \"f38c6bafe550f1e1034f532ea5bafd24b50fc368a5d0bec5780144b00e34310f\": container with ID starting with f38c6bafe550f1e1034f532ea5bafd24b50fc368a5d0bec5780144b00e34310f not found: ID does not exist" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.627614 4981 scope.go:117] "RemoveContainer" containerID="0a745bb45950f8a380c9451b07b9868ab46671e1545f01eae1bb83defabfccea" Jan 28 15:21:07 crc kubenswrapper[4981]: E0128 15:21:07.628109 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a745bb45950f8a380c9451b07b9868ab46671e1545f01eae1bb83defabfccea\": container with ID starting with 0a745bb45950f8a380c9451b07b9868ab46671e1545f01eae1bb83defabfccea not found: ID does not exist" containerID="0a745bb45950f8a380c9451b07b9868ab46671e1545f01eae1bb83defabfccea" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.628166 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a745bb45950f8a380c9451b07b9868ab46671e1545f01eae1bb83defabfccea"} err="failed to get container status \"0a745bb45950f8a380c9451b07b9868ab46671e1545f01eae1bb83defabfccea\": rpc error: code = NotFound desc = could not find container \"0a745bb45950f8a380c9451b07b9868ab46671e1545f01eae1bb83defabfccea\": container with ID starting with 0a745bb45950f8a380c9451b07b9868ab46671e1545f01eae1bb83defabfccea not found: ID does not exist" Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.629662 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fwnkd"] Jan 28 15:21:07 crc kubenswrapper[4981]: I0128 15:21:07.636718 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-fwnkd"] Jan 28 15:21:08 crc kubenswrapper[4981]: I0128 15:21:08.909291 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-dzkv7"] Jan 28 15:21:08 crc kubenswrapper[4981]: E0128 15:21:08.909582 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cea78484-cb73-4b6c-bf1f-36e44fcb7cf4" containerName="mariadb-account-create-update" Jan 28 15:21:08 crc kubenswrapper[4981]: I0128 15:21:08.909596 4981 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cea78484-cb73-4b6c-bf1f-36e44fcb7cf4" containerName="mariadb-account-create-update" Jan 28 15:21:08 crc kubenswrapper[4981]: E0128 15:21:08.909615 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c96d9d-6274-4310-a9c3-855a38413dda" containerName="init" Jan 28 15:21:08 crc kubenswrapper[4981]: I0128 15:21:08.909621 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c96d9d-6274-4310-a9c3-855a38413dda" containerName="init" Jan 28 15:21:08 crc kubenswrapper[4981]: E0128 15:21:08.909637 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8912af28-37cb-4f57-b318-9e3724b13213" containerName="mariadb-database-create" Jan 28 15:21:08 crc kubenswrapper[4981]: I0128 15:21:08.909643 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="8912af28-37cb-4f57-b318-9e3724b13213" containerName="mariadb-database-create" Jan 28 15:21:08 crc kubenswrapper[4981]: E0128 15:21:08.909655 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6fd85fb-5a63-4f04-8c99-c03167e5e4a9" containerName="mariadb-database-create" Jan 28 15:21:08 crc kubenswrapper[4981]: I0128 15:21:08.909662 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6fd85fb-5a63-4f04-8c99-c03167e5e4a9" containerName="mariadb-database-create" Jan 28 15:21:08 crc kubenswrapper[4981]: E0128 15:21:08.909683 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0329c13d-bd93-45a8-82a3-b990aa22da35" containerName="mariadb-account-create-update" Jan 28 15:21:08 crc kubenswrapper[4981]: I0128 15:21:08.909689 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0329c13d-bd93-45a8-82a3-b990aa22da35" containerName="mariadb-account-create-update" Jan 28 15:21:08 crc kubenswrapper[4981]: E0128 15:21:08.909701 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7971af1-53aa-4b9f-a6a0-179dc46d6519" containerName="mariadb-account-create-update" Jan 28 15:21:08 crc kubenswrapper[4981]: I0128 15:21:08.909706 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7971af1-53aa-4b9f-a6a0-179dc46d6519" containerName="mariadb-account-create-update" Jan 28 15:21:08 crc kubenswrapper[4981]: E0128 15:21:08.909716 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c96d9d-6274-4310-a9c3-855a38413dda" containerName="dnsmasq-dns" Jan 28 15:21:08 crc kubenswrapper[4981]: I0128 15:21:08.909722 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c96d9d-6274-4310-a9c3-855a38413dda" containerName="dnsmasq-dns" Jan 28 15:21:08 crc kubenswrapper[4981]: I0128 15:21:08.909868 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cea78484-cb73-4b6c-bf1f-36e44fcb7cf4" containerName="mariadb-account-create-update" Jan 28 15:21:08 crc kubenswrapper[4981]: I0128 15:21:08.909876 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="8912af28-37cb-4f57-b318-9e3724b13213" containerName="mariadb-database-create" Jan 28 15:21:08 crc kubenswrapper[4981]: I0128 15:21:08.909885 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0329c13d-bd93-45a8-82a3-b990aa22da35" containerName="mariadb-account-create-update" Jan 28 15:21:08 crc kubenswrapper[4981]: I0128 15:21:08.909893 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8c96d9d-6274-4310-a9c3-855a38413dda" containerName="dnsmasq-dns" Jan 28 15:21:08 crc kubenswrapper[4981]: I0128 15:21:08.909904 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7971af1-53aa-4b9f-a6a0-179dc46d6519" 
containerName="mariadb-account-create-update" Jan 28 15:21:08 crc kubenswrapper[4981]: I0128 15:21:08.909913 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6fd85fb-5a63-4f04-8c99-c03167e5e4a9" containerName="mariadb-database-create" Jan 28 15:21:08 crc kubenswrapper[4981]: I0128 15:21:08.910413 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dzkv7" Jan 28 15:21:08 crc kubenswrapper[4981]: I0128 15:21:08.924787 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dzkv7"] Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.001782 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h8k8\" (UniqueName: \"kubernetes.io/projected/b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44-kube-api-access-6h8k8\") pod \"glance-db-create-dzkv7\" (UID: \"b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44\") " pod="openstack/glance-db-create-dzkv7" Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.001848 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44-operator-scripts\") pod \"glance-db-create-dzkv7\" (UID: \"b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44\") " pod="openstack/glance-db-create-dzkv7" Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.018665 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9e9b-account-create-update-78cpp"] Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.020261 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9e9b-account-create-update-78cpp" Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.022479 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.042355 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9e9b-account-create-update-78cpp"] Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.103356 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88e9ea9c-6f3b-425a-bcb0-39d66e4040ee-operator-scripts\") pod \"glance-9e9b-account-create-update-78cpp\" (UID: \"88e9ea9c-6f3b-425a-bcb0-39d66e4040ee\") " pod="openstack/glance-9e9b-account-create-update-78cpp" Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.103766 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqggl\" (UniqueName: \"kubernetes.io/projected/88e9ea9c-6f3b-425a-bcb0-39d66e4040ee-kube-api-access-cqggl\") pod \"glance-9e9b-account-create-update-78cpp\" (UID: \"88e9ea9c-6f3b-425a-bcb0-39d66e4040ee\") " pod="openstack/glance-9e9b-account-create-update-78cpp" Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.103906 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h8k8\" (UniqueName: \"kubernetes.io/projected/b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44-kube-api-access-6h8k8\") pod \"glance-db-create-dzkv7\" (UID: \"b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44\") " pod="openstack/glance-db-create-dzkv7" Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.104028 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44-operator-scripts\") pod \"glance-db-create-dzkv7\" (UID: \"b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44\") " pod="openstack/glance-db-create-dzkv7" Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.105078 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44-operator-scripts\") pod \"glance-db-create-dzkv7\" (UID: \"b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44\") " pod="openstack/glance-db-create-dzkv7" Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.126719 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6h8k8\" (UniqueName: \"kubernetes.io/projected/b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44-kube-api-access-6h8k8\") pod \"glance-db-create-dzkv7\" (UID: \"b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44\") " pod="openstack/glance-db-create-dzkv7" Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.205396 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88e9ea9c-6f3b-425a-bcb0-39d66e4040ee-operator-scripts\") pod \"glance-9e9b-account-create-update-78cpp\" (UID: \"88e9ea9c-6f3b-425a-bcb0-39d66e4040ee\") " pod="openstack/glance-9e9b-account-create-update-78cpp" Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.205551 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqggl\" (UniqueName: \"kubernetes.io/projected/88e9ea9c-6f3b-425a-bcb0-39d66e4040ee-kube-api-access-cqggl\") pod \"glance-9e9b-account-create-update-78cpp\" (UID: \"88e9ea9c-6f3b-425a-bcb0-39d66e4040ee\") " pod="openstack/glance-9e9b-account-create-update-78cpp" Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.206097 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88e9ea9c-6f3b-425a-bcb0-39d66e4040ee-operator-scripts\") pod \"glance-9e9b-account-create-update-78cpp\" (UID: \"88e9ea9c-6f3b-425a-bcb0-39d66e4040ee\") " pod="openstack/glance-9e9b-account-create-update-78cpp" Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.222543 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqggl\" (UniqueName: \"kubernetes.io/projected/88e9ea9c-6f3b-425a-bcb0-39d66e4040ee-kube-api-access-cqggl\") pod \"glance-9e9b-account-create-update-78cpp\" (UID: \"88e9ea9c-6f3b-425a-bcb0-39d66e4040ee\") " pod="openstack/glance-9e9b-account-create-update-78cpp" Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.226810 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dzkv7" Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.341996 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9e9b-account-create-update-78cpp" Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.359076 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8c96d9d-6274-4310-a9c3-855a38413dda" path="/var/lib/kubelet/pods/c8c96d9d-6274-4310-a9c3-855a38413dda/volumes" Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.603846 4981 generic.go:334] "Generic (PLEG): container finished" podID="a59295bc-49fa-4b41-b2a1-3c19c27292e5" containerID="d946ab64400dc81e4f7bc6a25b3466f28778baa5e6d6a4b604cf92ba140fe171" exitCode=0 Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.603886 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zs4xh" event={"ID":"a59295bc-49fa-4b41-b2a1-3c19c27292e5","Type":"ContainerDied","Data":"d946ab64400dc81e4f7bc6a25b3466f28778baa5e6d6a4b604cf92ba140fe171"} Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.792733 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-dzkv7"] Jan 28 15:21:09 crc kubenswrapper[4981]: W0128 15:21:09.885328 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88e9ea9c_6f3b_425a_bcb0_39d66e4040ee.slice/crio-14ce6c202565489556499b415ffddfc212d7198b754914b2973836096d81e269 WatchSource:0}: Error finding container 14ce6c202565489556499b415ffddfc212d7198b754914b2973836096d81e269: Status 404 returned error can't find the container with id 14ce6c202565489556499b415ffddfc212d7198b754914b2973836096d81e269 Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.889314 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 28 15:21:09 crc kubenswrapper[4981]: I0128 15:21:09.891523 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9e9b-account-create-update-78cpp"] Jan 28 15:21:10 crc kubenswrapper[4981]: I0128 15:21:10.614717 4981 generic.go:334] "Generic (PLEG): container finished" podID="88e9ea9c-6f3b-425a-bcb0-39d66e4040ee" containerID="ace9fb9ce756a588c3c6affd7a1ce90201333b645faaccb74cbb6a00176e19a6" exitCode=0 Jan 28 15:21:10 crc kubenswrapper[4981]: I0128 15:21:10.614800 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9e9b-account-create-update-78cpp" event={"ID":"88e9ea9c-6f3b-425a-bcb0-39d66e4040ee","Type":"ContainerDied","Data":"ace9fb9ce756a588c3c6affd7a1ce90201333b645faaccb74cbb6a00176e19a6"} Jan 28 15:21:10 crc kubenswrapper[4981]: I0128 15:21:10.615238 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9e9b-account-create-update-78cpp" event={"ID":"88e9ea9c-6f3b-425a-bcb0-39d66e4040ee","Type":"ContainerStarted","Data":"14ce6c202565489556499b415ffddfc212d7198b754914b2973836096d81e269"} Jan 28 15:21:10 crc kubenswrapper[4981]: I0128 15:21:10.617474 4981 generic.go:334] "Generic (PLEG): container finished" podID="b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44" containerID="9d0d7f2c595ed34b6b746b5b2a439c9cebe1ed66d245adb0eb82d732491664d4" exitCode=0 Jan 28 15:21:10 crc kubenswrapper[4981]: I0128 15:21:10.617564 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dzkv7" event={"ID":"b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44","Type":"ContainerDied","Data":"9d0d7f2c595ed34b6b746b5b2a439c9cebe1ed66d245adb0eb82d732491664d4"} Jan 28 15:21:10 crc kubenswrapper[4981]: I0128 15:21:10.617601 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-create-dzkv7" event={"ID":"b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44","Type":"ContainerStarted","Data":"1502f982a04b90ddee61d43964e590c6be8ac8181af45711bacc7543cd8a6ad9"} Jan 28 15:21:10 crc kubenswrapper[4981]: I0128 15:21:10.785405 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-d22jd"] Jan 28 15:21:10 crc kubenswrapper[4981]: I0128 15:21:10.801860 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-d22jd"] Jan 28 15:21:10 crc kubenswrapper[4981]: I0128 15:21:10.887419 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-58dz7"] Jan 28 15:21:10 crc kubenswrapper[4981]: I0128 15:21:10.888526 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-58dz7" Jan 28 15:21:10 crc kubenswrapper[4981]: I0128 15:21:10.890610 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 28 15:21:10 crc kubenswrapper[4981]: I0128 15:21:10.901512 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-58dz7"] Jan 28 15:21:10 crc kubenswrapper[4981]: I0128 15:21:10.979679 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.047841 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dad862f8-3d07-4a51-81a2-c6d7226f8ba1-operator-scripts\") pod \"root-account-create-update-58dz7\" (UID: \"dad862f8-3d07-4a51-81a2-c6d7226f8ba1\") " pod="openstack/root-account-create-update-58dz7" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.047995 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cx6n\" (UniqueName: \"kubernetes.io/projected/dad862f8-3d07-4a51-81a2-c6d7226f8ba1-kube-api-access-7cx6n\") pod \"root-account-create-update-58dz7\" (UID: \"dad862f8-3d07-4a51-81a2-c6d7226f8ba1\") " pod="openstack/root-account-create-update-58dz7" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.051512 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.152771 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a59295bc-49fa-4b41-b2a1-3c19c27292e5-swiftconf\") pod \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.152843 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phqf8\" (UniqueName: \"kubernetes.io/projected/a59295bc-49fa-4b41-b2a1-3c19c27292e5-kube-api-access-phqf8\") pod \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.152897 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a59295bc-49fa-4b41-b2a1-3c19c27292e5-ring-data-devices\") pod \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.152954 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a59295bc-49fa-4b41-b2a1-3c19c27292e5-scripts\") pod \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.152976 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a59295bc-49fa-4b41-b2a1-3c19c27292e5-combined-ca-bundle\") pod \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.153078 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a59295bc-49fa-4b41-b2a1-3c19c27292e5-etc-swift\") pod \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.153105 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a59295bc-49fa-4b41-b2a1-3c19c27292e5-dispersionconf\") pod \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\" (UID: \"a59295bc-49fa-4b41-b2a1-3c19c27292e5\") " Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.153423 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dad862f8-3d07-4a51-81a2-c6d7226f8ba1-operator-scripts\") pod \"root-account-create-update-58dz7\" (UID: \"dad862f8-3d07-4a51-81a2-c6d7226f8ba1\") " pod="openstack/root-account-create-update-58dz7" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.153587 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cx6n\" (UniqueName: \"kubernetes.io/projected/dad862f8-3d07-4a51-81a2-c6d7226f8ba1-kube-api-access-7cx6n\") pod \"root-account-create-update-58dz7\" (UID: \"dad862f8-3d07-4a51-81a2-c6d7226f8ba1\") " pod="openstack/root-account-create-update-58dz7" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.158571 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a59295bc-49fa-4b41-b2a1-3c19c27292e5-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "a59295bc-49fa-4b41-b2a1-3c19c27292e5" (UID: "a59295bc-49fa-4b41-b2a1-3c19c27292e5"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.159437 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dad862f8-3d07-4a51-81a2-c6d7226f8ba1-operator-scripts\") pod \"root-account-create-update-58dz7\" (UID: \"dad862f8-3d07-4a51-81a2-c6d7226f8ba1\") " pod="openstack/root-account-create-update-58dz7" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.169874 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a59295bc-49fa-4b41-b2a1-3c19c27292e5-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "a59295bc-49fa-4b41-b2a1-3c19c27292e5" (UID: "a59295bc-49fa-4b41-b2a1-3c19c27292e5"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.175887 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a59295bc-49fa-4b41-b2a1-3c19c27292e5-kube-api-access-phqf8" (OuterVolumeSpecName: "kube-api-access-phqf8") pod "a59295bc-49fa-4b41-b2a1-3c19c27292e5" (UID: "a59295bc-49fa-4b41-b2a1-3c19c27292e5"). InnerVolumeSpecName "kube-api-access-phqf8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.177812 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59295bc-49fa-4b41-b2a1-3c19c27292e5-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "a59295bc-49fa-4b41-b2a1-3c19c27292e5" (UID: "a59295bc-49fa-4b41-b2a1-3c19c27292e5"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.185269 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cx6n\" (UniqueName: \"kubernetes.io/projected/dad862f8-3d07-4a51-81a2-c6d7226f8ba1-kube-api-access-7cx6n\") pod \"root-account-create-update-58dz7\" (UID: \"dad862f8-3d07-4a51-81a2-c6d7226f8ba1\") " pod="openstack/root-account-create-update-58dz7" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.192081 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59295bc-49fa-4b41-b2a1-3c19c27292e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a59295bc-49fa-4b41-b2a1-3c19c27292e5" (UID: "a59295bc-49fa-4b41-b2a1-3c19c27292e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.193160 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a59295bc-49fa-4b41-b2a1-3c19c27292e5-scripts" (OuterVolumeSpecName: "scripts") pod "a59295bc-49fa-4b41-b2a1-3c19c27292e5" (UID: "a59295bc-49fa-4b41-b2a1-3c19c27292e5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.213379 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a59295bc-49fa-4b41-b2a1-3c19c27292e5-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "a59295bc-49fa-4b41-b2a1-3c19c27292e5" (UID: "a59295bc-49fa-4b41-b2a1-3c19c27292e5"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.221518 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-58dz7" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.255112 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a59295bc-49fa-4b41-b2a1-3c19c27292e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.255157 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a59295bc-49fa-4b41-b2a1-3c19c27292e5-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.255166 4981 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/a59295bc-49fa-4b41-b2a1-3c19c27292e5-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.255174 4981 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/a59295bc-49fa-4b41-b2a1-3c19c27292e5-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.255182 4981 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/a59295bc-49fa-4b41-b2a1-3c19c27292e5-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.255215 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phqf8\" (UniqueName: \"kubernetes.io/projected/a59295bc-49fa-4b41-b2a1-3c19c27292e5-kube-api-access-phqf8\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.255225 4981 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/a59295bc-49fa-4b41-b2a1-3c19c27292e5-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.327473 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7971af1-53aa-4b9f-a6a0-179dc46d6519" path="/var/lib/kubelet/pods/d7971af1-53aa-4b9f-a6a0-179dc46d6519/volumes" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.631692 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-zs4xh" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.632489 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-zs4xh" event={"ID":"a59295bc-49fa-4b41-b2a1-3c19c27292e5","Type":"ContainerDied","Data":"8f927628342d0635c8e85affa0e7885ef1ef90c838ec8d032244ec7bc6e4fd7b"} Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.632596 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f927628342d0635c8e85affa0e7885ef1ef90c838ec8d032244ec7bc6e4fd7b" Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.711280 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-58dz7"] Jan 28 15:21:11 crc kubenswrapper[4981]: W0128 15:21:11.728973 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddad862f8_3d07_4a51_81a2_c6d7226f8ba1.slice/crio-e0f5d9367f1f90275feaaba1a1432185425cb5eae7d7095b8846c8fd90dba50c WatchSource:0}: Error finding container e0f5d9367f1f90275feaaba1a1432185425cb5eae7d7095b8846c8fd90dba50c: Status 404 returned error can't find the container with id e0f5d9367f1f90275feaaba1a1432185425cb5eae7d7095b8846c8fd90dba50c Jan 28 15:21:11 crc kubenswrapper[4981]: I0128 15:21:11.952229 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9e9b-account-create-update-78cpp" Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.030595 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dzkv7" Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.083693 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88e9ea9c-6f3b-425a-bcb0-39d66e4040ee-operator-scripts\") pod \"88e9ea9c-6f3b-425a-bcb0-39d66e4040ee\" (UID: \"88e9ea9c-6f3b-425a-bcb0-39d66e4040ee\") " Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.084061 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqggl\" (UniqueName: \"kubernetes.io/projected/88e9ea9c-6f3b-425a-bcb0-39d66e4040ee-kube-api-access-cqggl\") pod \"88e9ea9c-6f3b-425a-bcb0-39d66e4040ee\" (UID: \"88e9ea9c-6f3b-425a-bcb0-39d66e4040ee\") " Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.084479 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88e9ea9c-6f3b-425a-bcb0-39d66e4040ee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88e9ea9c-6f3b-425a-bcb0-39d66e4040ee" (UID: "88e9ea9c-6f3b-425a-bcb0-39d66e4040ee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.084844 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88e9ea9c-6f3b-425a-bcb0-39d66e4040ee-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.090440 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88e9ea9c-6f3b-425a-bcb0-39d66e4040ee-kube-api-access-cqggl" (OuterVolumeSpecName: "kube-api-access-cqggl") pod "88e9ea9c-6f3b-425a-bcb0-39d66e4040ee" (UID: "88e9ea9c-6f3b-425a-bcb0-39d66e4040ee"). InnerVolumeSpecName "kube-api-access-cqggl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.185798 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h8k8\" (UniqueName: \"kubernetes.io/projected/b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44-kube-api-access-6h8k8\") pod \"b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44\" (UID: \"b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44\") " Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.185932 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44-operator-scripts\") pod \"b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44\" (UID: \"b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44\") " Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.186398 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqggl\" (UniqueName: \"kubernetes.io/projected/88e9ea9c-6f3b-425a-bcb0-39d66e4040ee-kube-api-access-cqggl\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.186629 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44" (UID: "b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.198599 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44-kube-api-access-6h8k8" (OuterVolumeSpecName: "kube-api-access-6h8k8") pod "b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44" (UID: "b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44"). InnerVolumeSpecName "kube-api-access-6h8k8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.287874 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.287910 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h8k8\" (UniqueName: \"kubernetes.io/projected/b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44-kube-api-access-6h8k8\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.639901 4981 generic.go:334] "Generic (PLEG): container finished" podID="dad862f8-3d07-4a51-81a2-c6d7226f8ba1" containerID="ef25cfad4afdd200887a4c460764516f9bfc87dbcc228f662344596b5b6db633" exitCode=0 Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.639982 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-58dz7" event={"ID":"dad862f8-3d07-4a51-81a2-c6d7226f8ba1","Type":"ContainerDied","Data":"ef25cfad4afdd200887a4c460764516f9bfc87dbcc228f662344596b5b6db633"} Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.640011 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-58dz7" event={"ID":"dad862f8-3d07-4a51-81a2-c6d7226f8ba1","Type":"ContainerStarted","Data":"e0f5d9367f1f90275feaaba1a1432185425cb5eae7d7095b8846c8fd90dba50c"} Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.641299 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-dzkv7" event={"ID":"b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44","Type":"ContainerDied","Data":"1502f982a04b90ddee61d43964e590c6be8ac8181af45711bacc7543cd8a6ad9"} Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.641330 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1502f982a04b90ddee61d43964e590c6be8ac8181af45711bacc7543cd8a6ad9" Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.641406 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-dzkv7" Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.643884 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9e9b-account-create-update-78cpp" event={"ID":"88e9ea9c-6f3b-425a-bcb0-39d66e4040ee","Type":"ContainerDied","Data":"14ce6c202565489556499b415ffddfc212d7198b754914b2973836096d81e269"} Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.643965 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14ce6c202565489556499b415ffddfc212d7198b754914b2973836096d81e269" Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.643988 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9e9b-account-create-update-78cpp" Jan 28 15:21:12 crc kubenswrapper[4981]: I0128 15:21:12.999223 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-etc-swift\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:21:13 crc kubenswrapper[4981]: I0128 15:21:13.010477 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/a3c5f4dc-185e-4293-9853-f16cde7997fa-etc-swift\") pod \"swift-storage-0\" (UID: \"a3c5f4dc-185e-4293-9853-f16cde7997fa\") " pod="openstack/swift-storage-0" Jan 28 15:21:13 crc kubenswrapper[4981]: I0128 15:21:13.276773 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 28 15:21:13 crc kubenswrapper[4981]: I0128 15:21:13.936001 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 15:21:13 crc kubenswrapper[4981]: W0128 15:21:13.941741 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3c5f4dc_185e_4293_9853_f16cde7997fa.slice/crio-ca66169f15a225ccd2225f395a10880a47c5ca3f368ce68b5e748d1f936110a7 WatchSource:0}: Error finding container ca66169f15a225ccd2225f395a10880a47c5ca3f368ce68b5e748d1f936110a7: Status 404 returned error can't find the container with id ca66169f15a225ccd2225f395a10880a47c5ca3f368ce68b5e748d1f936110a7 Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.025581 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-58dz7" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.117313 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cx6n\" (UniqueName: \"kubernetes.io/projected/dad862f8-3d07-4a51-81a2-c6d7226f8ba1-kube-api-access-7cx6n\") pod \"dad862f8-3d07-4a51-81a2-c6d7226f8ba1\" (UID: \"dad862f8-3d07-4a51-81a2-c6d7226f8ba1\") " Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.118000 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dad862f8-3d07-4a51-81a2-c6d7226f8ba1-operator-scripts\") pod \"dad862f8-3d07-4a51-81a2-c6d7226f8ba1\" (UID: \"dad862f8-3d07-4a51-81a2-c6d7226f8ba1\") " Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.118622 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dad862f8-3d07-4a51-81a2-c6d7226f8ba1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dad862f8-3d07-4a51-81a2-c6d7226f8ba1" (UID: "dad862f8-3d07-4a51-81a2-c6d7226f8ba1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.125445 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dad862f8-3d07-4a51-81a2-c6d7226f8ba1-kube-api-access-7cx6n" (OuterVolumeSpecName: "kube-api-access-7cx6n") pod "dad862f8-3d07-4a51-81a2-c6d7226f8ba1" (UID: "dad862f8-3d07-4a51-81a2-c6d7226f8ba1"). InnerVolumeSpecName "kube-api-access-7cx6n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.161672 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-6qpmd"] Jan 28 15:21:14 crc kubenswrapper[4981]: E0128 15:21:14.162029 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dad862f8-3d07-4a51-81a2-c6d7226f8ba1" containerName="mariadb-account-create-update" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.162049 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="dad862f8-3d07-4a51-81a2-c6d7226f8ba1" containerName="mariadb-account-create-update" Jan 28 15:21:14 crc kubenswrapper[4981]: E0128 15:21:14.162070 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a59295bc-49fa-4b41-b2a1-3c19c27292e5" containerName="swift-ring-rebalance" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.162079 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a59295bc-49fa-4b41-b2a1-3c19c27292e5" containerName="swift-ring-rebalance" Jan 28 15:21:14 crc kubenswrapper[4981]: E0128 15:21:14.162092 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88e9ea9c-6f3b-425a-bcb0-39d66e4040ee" containerName="mariadb-account-create-update" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.162101 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e9ea9c-6f3b-425a-bcb0-39d66e4040ee" containerName="mariadb-account-create-update" Jan 28 15:21:14 crc kubenswrapper[4981]: E0128 15:21:14.162126 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44" containerName="mariadb-database-create" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.162135 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44" containerName="mariadb-database-create" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.162340 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="a59295bc-49fa-4b41-b2a1-3c19c27292e5" containerName="swift-ring-rebalance" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.162353 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44" containerName="mariadb-database-create" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.162373 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="88e9ea9c-6f3b-425a-bcb0-39d66e4040ee" containerName="mariadb-account-create-update" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.162389 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="dad862f8-3d07-4a51-81a2-c6d7226f8ba1" containerName="mariadb-account-create-update" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.162989 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6qpmd" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.165001 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.165948 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hq4bg" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.173594 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6qpmd"] Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.220159 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dad862f8-3d07-4a51-81a2-c6d7226f8ba1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.220227 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cx6n\" (UniqueName: \"kubernetes.io/projected/dad862f8-3d07-4a51-81a2-c6d7226f8ba1-kube-api-access-7cx6n\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.321543 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6g8v\" (UniqueName: \"kubernetes.io/projected/f5f8f119-df95-4eef-979b-ae7d2cd54f00-kube-api-access-w6g8v\") pod \"glance-db-sync-6qpmd\" (UID: \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\") " pod="openstack/glance-db-sync-6qpmd" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.321618 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5f8f119-df95-4eef-979b-ae7d2cd54f00-db-sync-config-data\") pod \"glance-db-sync-6qpmd\" (UID: \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\") " pod="openstack/glance-db-sync-6qpmd" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.321648 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f8f119-df95-4eef-979b-ae7d2cd54f00-combined-ca-bundle\") pod \"glance-db-sync-6qpmd\" (UID: \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\") " pod="openstack/glance-db-sync-6qpmd" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.321781 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f8f119-df95-4eef-979b-ae7d2cd54f00-config-data\") pod \"glance-db-sync-6qpmd\" (UID: \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\") " pod="openstack/glance-db-sync-6qpmd" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.424205 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5f8f119-df95-4eef-979b-ae7d2cd54f00-db-sync-config-data\") pod \"glance-db-sync-6qpmd\" (UID: \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\") " pod="openstack/glance-db-sync-6qpmd" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.424272 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f8f119-df95-4eef-979b-ae7d2cd54f00-combined-ca-bundle\") pod \"glance-db-sync-6qpmd\" (UID: \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\") " pod="openstack/glance-db-sync-6qpmd" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.424407 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f8f119-df95-4eef-979b-ae7d2cd54f00-config-data\") pod \"glance-db-sync-6qpmd\" (UID: \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\") " pod="openstack/glance-db-sync-6qpmd" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.424511 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6g8v\" (UniqueName: \"kubernetes.io/projected/f5f8f119-df95-4eef-979b-ae7d2cd54f00-kube-api-access-w6g8v\") pod \"glance-db-sync-6qpmd\" (UID: \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\") " pod="openstack/glance-db-sync-6qpmd" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.428885 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f8f119-df95-4eef-979b-ae7d2cd54f00-config-data\") pod \"glance-db-sync-6qpmd\" (UID: \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\") " pod="openstack/glance-db-sync-6qpmd" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.430413 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5f8f119-df95-4eef-979b-ae7d2cd54f00-db-sync-config-data\") pod \"glance-db-sync-6qpmd\" (UID: \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\") " pod="openstack/glance-db-sync-6qpmd" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.431688 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f8f119-df95-4eef-979b-ae7d2cd54f00-combined-ca-bundle\") pod \"glance-db-sync-6qpmd\" (UID: \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\") " pod="openstack/glance-db-sync-6qpmd" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.466314 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6g8v\" (UniqueName: \"kubernetes.io/projected/f5f8f119-df95-4eef-979b-ae7d2cd54f00-kube-api-access-w6g8v\") pod \"glance-db-sync-6qpmd\" (UID: \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\") " pod="openstack/glance-db-sync-6qpmd" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.477880 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6qpmd" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.672975 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-58dz7" event={"ID":"dad862f8-3d07-4a51-81a2-c6d7226f8ba1","Type":"ContainerDied","Data":"e0f5d9367f1f90275feaaba1a1432185425cb5eae7d7095b8846c8fd90dba50c"} Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.673383 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0f5d9367f1f90275feaaba1a1432185425cb5eae7d7095b8846c8fd90dba50c" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.673443 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-58dz7" Jan 28 15:21:14 crc kubenswrapper[4981]: I0128 15:21:14.678904 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a3c5f4dc-185e-4293-9853-f16cde7997fa","Type":"ContainerStarted","Data":"ca66169f15a225ccd2225f395a10880a47c5ca3f368ce68b5e748d1f936110a7"} Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.048626 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6qpmd"] Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.184144 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-bnkpb" podUID="8109b11f-0a6a-4894-b7f7-c6d46a62570e" containerName="ovn-controller" probeResult="failure" output=< Jan 28 15:21:15 crc kubenswrapper[4981]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 28 15:21:15 crc kubenswrapper[4981]: > Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.217600 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.234210 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-c8dt7" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.436327 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-bnkpb-config-24t6d"] Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.443845 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.446620 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.450759 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bnkpb-config-24t6d"] Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.544150 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9744a838-2a91-46ee-bd31-524304313885-var-log-ovn\") pod \"ovn-controller-bnkpb-config-24t6d\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.544693 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9744a838-2a91-46ee-bd31-524304313885-var-run-ovn\") pod \"ovn-controller-bnkpb-config-24t6d\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.544729 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rclp6\" (UniqueName: \"kubernetes.io/projected/9744a838-2a91-46ee-bd31-524304313885-kube-api-access-rclp6\") pod \"ovn-controller-bnkpb-config-24t6d\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.544867 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9744a838-2a91-46ee-bd31-524304313885-var-run\") pod 
\"ovn-controller-bnkpb-config-24t6d\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.544899 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9744a838-2a91-46ee-bd31-524304313885-scripts\") pod \"ovn-controller-bnkpb-config-24t6d\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.544973 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9744a838-2a91-46ee-bd31-524304313885-additional-scripts\") pod \"ovn-controller-bnkpb-config-24t6d\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.646161 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9744a838-2a91-46ee-bd31-524304313885-additional-scripts\") pod \"ovn-controller-bnkpb-config-24t6d\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.646302 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9744a838-2a91-46ee-bd31-524304313885-var-log-ovn\") pod \"ovn-controller-bnkpb-config-24t6d\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.646373 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9744a838-2a91-46ee-bd31-524304313885-var-run-ovn\") pod \"ovn-controller-bnkpb-config-24t6d\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.646402 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rclp6\" (UniqueName: \"kubernetes.io/projected/9744a838-2a91-46ee-bd31-524304313885-kube-api-access-rclp6\") pod \"ovn-controller-bnkpb-config-24t6d\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.646476 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9744a838-2a91-46ee-bd31-524304313885-var-run\") pod \"ovn-controller-bnkpb-config-24t6d\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.646510 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9744a838-2a91-46ee-bd31-524304313885-scripts\") pod \"ovn-controller-bnkpb-config-24t6d\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.646676 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/9744a838-2a91-46ee-bd31-524304313885-var-log-ovn\") pod \"ovn-controller-bnkpb-config-24t6d\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.646691 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9744a838-2a91-46ee-bd31-524304313885-var-run-ovn\") pod \"ovn-controller-bnkpb-config-24t6d\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.646734 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9744a838-2a91-46ee-bd31-524304313885-var-run\") pod \"ovn-controller-bnkpb-config-24t6d\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.646846 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9744a838-2a91-46ee-bd31-524304313885-additional-scripts\") pod \"ovn-controller-bnkpb-config-24t6d\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.648961 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9744a838-2a91-46ee-bd31-524304313885-scripts\") pod \"ovn-controller-bnkpb-config-24t6d\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.663865 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rclp6\" (UniqueName: \"kubernetes.io/projected/9744a838-2a91-46ee-bd31-524304313885-kube-api-access-rclp6\") pod \"ovn-controller-bnkpb-config-24t6d\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.689641 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a3c5f4dc-185e-4293-9853-f16cde7997fa","Type":"ContainerStarted","Data":"9ec1ac878ebf117d45a7269001f306d118c4504188e8e571b7b0512f383ee304"} Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.689692 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a3c5f4dc-185e-4293-9853-f16cde7997fa","Type":"ContainerStarted","Data":"55542a20c500d0e20764f6b7ad058907ec9fcf0079cabc1e3e5b3314fa39b0a4"} Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.689708 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a3c5f4dc-185e-4293-9853-f16cde7997fa","Type":"ContainerStarted","Data":"0568bb57fbc81b7258453ebdb44d2707e54788f2aab7f659da3c0f6df52d1983"} Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.691255 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6qpmd" event={"ID":"f5f8f119-df95-4eef-979b-ae7d2cd54f00","Type":"ContainerStarted","Data":"389840b1f95478de110c38dde5ab8fe82ffd0b601df1128e6843108913ed22f6"} Jan 28 15:21:15 crc kubenswrapper[4981]: I0128 15:21:15.843878 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:16 crc kubenswrapper[4981]: I0128 15:21:16.252901 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-bnkpb-config-24t6d"] Jan 28 15:21:16 crc kubenswrapper[4981]: W0128 15:21:16.264917 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9744a838_2a91_46ee_bd31_524304313885.slice/crio-2142ffbbb06ef8f560d39880ab03984866d6f7489c5fbacaada79eda43f98d77 WatchSource:0}: Error finding container 2142ffbbb06ef8f560d39880ab03984866d6f7489c5fbacaada79eda43f98d77: Status 404 returned error can't find the container with id 2142ffbbb06ef8f560d39880ab03984866d6f7489c5fbacaada79eda43f98d77 Jan 28 15:21:16 crc kubenswrapper[4981]: I0128 15:21:16.706971 4981 generic.go:334] "Generic (PLEG): container finished" podID="5cccad1c-80c8-4806-a093-ecb1ad203f3c" containerID="ed3fa028e256ef52d67123bf375679a669443697914c1d8322591cd65286f694" exitCode=0 Jan 28 15:21:16 crc kubenswrapper[4981]: I0128 15:21:16.707112 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5cccad1c-80c8-4806-a093-ecb1ad203f3c","Type":"ContainerDied","Data":"ed3fa028e256ef52d67123bf375679a669443697914c1d8322591cd65286f694"} Jan 28 15:21:16 crc kubenswrapper[4981]: I0128 15:21:16.714735 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a3c5f4dc-185e-4293-9853-f16cde7997fa","Type":"ContainerStarted","Data":"804af75cc9aef8439acebb533584de3896c0354819305eb4485bc82aa1489fc8"} Jan 28 15:21:16 crc kubenswrapper[4981]: I0128 15:21:16.716300 4981 generic.go:334] "Generic (PLEG): container finished" podID="6456c27c-6d70-453b-a759-b6411aa67f51" containerID="ae54a8260c30b63b6c7115a3e7a119595f296196630adf0f1e2c402962c61321" exitCode=0 Jan 28 15:21:16 crc kubenswrapper[4981]: I0128 15:21:16.716436 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6456c27c-6d70-453b-a759-b6411aa67f51","Type":"ContainerDied","Data":"ae54a8260c30b63b6c7115a3e7a119595f296196630adf0f1e2c402962c61321"} Jan 28 15:21:16 crc kubenswrapper[4981]: I0128 15:21:16.717531 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bnkpb-config-24t6d" event={"ID":"9744a838-2a91-46ee-bd31-524304313885","Type":"ContainerStarted","Data":"2142ffbbb06ef8f560d39880ab03984866d6f7489c5fbacaada79eda43f98d77"} Jan 28 15:21:17 crc kubenswrapper[4981]: I0128 15:21:17.727246 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6456c27c-6d70-453b-a759-b6411aa67f51","Type":"ContainerStarted","Data":"af06c90b2e043d7627bd881a6b21cfdb96d65ceaa451a398a6eea4739e5ba22a"} Jan 28 15:21:17 crc kubenswrapper[4981]: I0128 15:21:17.727646 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 28 15:21:17 crc kubenswrapper[4981]: I0128 15:21:17.729547 4981 generic.go:334] "Generic (PLEG): container finished" podID="9744a838-2a91-46ee-bd31-524304313885" containerID="3dfd3c0b41fc2054bb12a9b0e6266a0b848761f45f1a6627405d4920c9e8018c" exitCode=0 Jan 28 15:21:17 crc kubenswrapper[4981]: I0128 15:21:17.729597 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bnkpb-config-24t6d" 
event={"ID":"9744a838-2a91-46ee-bd31-524304313885","Type":"ContainerDied","Data":"3dfd3c0b41fc2054bb12a9b0e6266a0b848761f45f1a6627405d4920c9e8018c"} Jan 28 15:21:17 crc kubenswrapper[4981]: I0128 15:21:17.732114 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5cccad1c-80c8-4806-a093-ecb1ad203f3c","Type":"ContainerStarted","Data":"6bf5c589eda06e1fde576e47b8606fa08955ff587665638e00849bbbfc2e3b6b"} Jan 28 15:21:17 crc kubenswrapper[4981]: I0128 15:21:17.732337 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:21:17 crc kubenswrapper[4981]: I0128 15:21:17.753260 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=49.794946423 podStartE2EDuration="58.75323699s" podCreationTimestamp="2026-01-28 15:20:19 +0000 UTC" firstStartedPulling="2026-01-28 15:20:33.13820159 +0000 UTC m=+1044.590359831" lastFinishedPulling="2026-01-28 15:20:42.096492157 +0000 UTC m=+1053.548650398" observedRunningTime="2026-01-28 15:21:17.75136009 +0000 UTC m=+1089.203518331" watchObservedRunningTime="2026-01-28 15:21:17.75323699 +0000 UTC m=+1089.205395231" Jan 28 15:21:17 crc kubenswrapper[4981]: I0128 15:21:17.802438 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=50.23423857 podStartE2EDuration="58.802415296s" podCreationTimestamp="2026-01-28 15:20:19 +0000 UTC" firstStartedPulling="2026-01-28 15:20:33.528183497 +0000 UTC m=+1044.980341738" lastFinishedPulling="2026-01-28 15:20:42.096360183 +0000 UTC m=+1053.548518464" observedRunningTime="2026-01-28 15:21:17.796292874 +0000 UTC m=+1089.248451115" watchObservedRunningTime="2026-01-28 15:21:17.802415296 +0000 UTC m=+1089.254573537" Jan 28 15:21:18 crc kubenswrapper[4981]: I0128 15:21:18.750246 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a3c5f4dc-185e-4293-9853-f16cde7997fa","Type":"ContainerStarted","Data":"f7dc8cbc6f6abe6504190fe1d0a677e16795ac57e668cd45157c95cbc7158c35"} Jan 28 15:21:18 crc kubenswrapper[4981]: I0128 15:21:18.750711 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a3c5f4dc-185e-4293-9853-f16cde7997fa","Type":"ContainerStarted","Data":"ca53d980cb3cb4f41b3400493621979efb855fa882ad106e6e81a80e19cc3cbe"} Jan 28 15:21:18 crc kubenswrapper[4981]: I0128 15:21:18.750728 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a3c5f4dc-185e-4293-9853-f16cde7997fa","Type":"ContainerStarted","Data":"be3a8454c7f90cca133e27e49129ec94713e5bd81625cbc957c437cf91e22b30"} Jan 28 15:21:18 crc kubenswrapper[4981]: I0128 15:21:18.750741 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a3c5f4dc-185e-4293-9853-f16cde7997fa","Type":"ContainerStarted","Data":"5c5a17e539bd5d879f55a116ed7a43be9386a4af185b72fa7bfcf3e40d80c198"} Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.094683 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.215844 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9744a838-2a91-46ee-bd31-524304313885-var-run\") pod \"9744a838-2a91-46ee-bd31-524304313885\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.215973 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9744a838-2a91-46ee-bd31-524304313885-var-run" (OuterVolumeSpecName: "var-run") pod "9744a838-2a91-46ee-bd31-524304313885" (UID: "9744a838-2a91-46ee-bd31-524304313885"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.215993 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9744a838-2a91-46ee-bd31-524304313885-scripts\") pod \"9744a838-2a91-46ee-bd31-524304313885\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.216167 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9744a838-2a91-46ee-bd31-524304313885-additional-scripts\") pod \"9744a838-2a91-46ee-bd31-524304313885\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.216340 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9744a838-2a91-46ee-bd31-524304313885-var-run-ovn\") pod \"9744a838-2a91-46ee-bd31-524304313885\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.216385 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rclp6\" (UniqueName: \"kubernetes.io/projected/9744a838-2a91-46ee-bd31-524304313885-kube-api-access-rclp6\") pod \"9744a838-2a91-46ee-bd31-524304313885\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.216421 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9744a838-2a91-46ee-bd31-524304313885-var-log-ovn\") pod \"9744a838-2a91-46ee-bd31-524304313885\" (UID: \"9744a838-2a91-46ee-bd31-524304313885\") " Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.216484 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9744a838-2a91-46ee-bd31-524304313885-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9744a838-2a91-46ee-bd31-524304313885" (UID: "9744a838-2a91-46ee-bd31-524304313885"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.216633 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9744a838-2a91-46ee-bd31-524304313885-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9744a838-2a91-46ee-bd31-524304313885" (UID: "9744a838-2a91-46ee-bd31-524304313885"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.216894 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9744a838-2a91-46ee-bd31-524304313885-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9744a838-2a91-46ee-bd31-524304313885" (UID: "9744a838-2a91-46ee-bd31-524304313885"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.217150 4981 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9744a838-2a91-46ee-bd31-524304313885-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.217214 4981 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9744a838-2a91-46ee-bd31-524304313885-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.217233 4981 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9744a838-2a91-46ee-bd31-524304313885-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.217251 4981 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9744a838-2a91-46ee-bd31-524304313885-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.217161 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9744a838-2a91-46ee-bd31-524304313885-scripts" (OuterVolumeSpecName: "scripts") pod "9744a838-2a91-46ee-bd31-524304313885" (UID: "9744a838-2a91-46ee-bd31-524304313885"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.221986 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9744a838-2a91-46ee-bd31-524304313885-kube-api-access-rclp6" (OuterVolumeSpecName: "kube-api-access-rclp6") pod "9744a838-2a91-46ee-bd31-524304313885" (UID: "9744a838-2a91-46ee-bd31-524304313885"). InnerVolumeSpecName "kube-api-access-rclp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.319052 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rclp6\" (UniqueName: \"kubernetes.io/projected/9744a838-2a91-46ee-bd31-524304313885-kube-api-access-rclp6\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.319115 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9744a838-2a91-46ee-bd31-524304313885-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.769094 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-bnkpb-config-24t6d" event={"ID":"9744a838-2a91-46ee-bd31-524304313885","Type":"ContainerDied","Data":"2142ffbbb06ef8f560d39880ab03984866d6f7489c5fbacaada79eda43f98d77"} Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.769132 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2142ffbbb06ef8f560d39880ab03984866d6f7489c5fbacaada79eda43f98d77" Jan 28 15:21:19 crc kubenswrapper[4981]: I0128 15:21:19.769267 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-bnkpb-config-24t6d" Jan 28 15:21:20 crc kubenswrapper[4981]: I0128 15:21:20.192158 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-bnkpb-config-24t6d"] Jan 28 15:21:20 crc kubenswrapper[4981]: I0128 15:21:20.205224 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-bnkpb" Jan 28 15:21:20 crc kubenswrapper[4981]: I0128 15:21:20.207019 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-bnkpb-config-24t6d"] Jan 28 15:21:20 crc kubenswrapper[4981]: I0128 15:21:20.781083 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a3c5f4dc-185e-4293-9853-f16cde7997fa","Type":"ContainerStarted","Data":"c62c65568f18f6f508576b47f5bf9ac834d31a6d2c49add311731edf45523f64"} Jan 28 15:21:20 crc kubenswrapper[4981]: I0128 15:21:20.781122 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a3c5f4dc-185e-4293-9853-f16cde7997fa","Type":"ContainerStarted","Data":"1050080abddd87dd1b42ae9cfc5a8e9cbb835d9ed24cc2816dc57827e0c32aef"} Jan 28 15:21:20 crc kubenswrapper[4981]: I0128 15:21:20.781130 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a3c5f4dc-185e-4293-9853-f16cde7997fa","Type":"ContainerStarted","Data":"a2946475a1baec45d788ef77ed0a65ae94a8e405860ba8b50fcb91b20bf821b1"} Jan 28 15:21:20 crc kubenswrapper[4981]: I0128 15:21:20.781138 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a3c5f4dc-185e-4293-9853-f16cde7997fa","Type":"ContainerStarted","Data":"f27f46702cd32a6bff09788eff7f4c16e61348f49138bd660fabcd559ec961d6"} Jan 28 15:21:21 crc kubenswrapper[4981]: I0128 15:21:21.327996 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9744a838-2a91-46ee-bd31-524304313885" path="/var/lib/kubelet/pods/9744a838-2a91-46ee-bd31-524304313885/volumes" Jan 28 15:21:29 crc kubenswrapper[4981]: I0128 15:21:29.867162 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"a3c5f4dc-185e-4293-9853-f16cde7997fa","Type":"ContainerStarted","Data":"5ab26bb68532506ce62a7b965afa6104598c8c3bfbd469e3e175205459dfb448"} Jan 28 15:21:30 crc kubenswrapper[4981]: I0128 15:21:30.884315 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a3c5f4dc-185e-4293-9853-f16cde7997fa","Type":"ContainerStarted","Data":"4397401afa7d22ae1e61a3c1c4ca0886862cfdf79a81db154d84781ef0211d2f"} Jan 28 15:21:30 crc kubenswrapper[4981]: I0128 15:21:30.917649 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:21:30 crc kubenswrapper[4981]: I0128 15:21:30.933388 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 28 15:21:31 crc kubenswrapper[4981]: I0128 15:21:31.900023 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"a3c5f4dc-185e-4293-9853-f16cde7997fa","Type":"ContainerStarted","Data":"eaf91bee3e19ba622bd647a4694505135e52c508a634afd2efbf3b8843f4ec39"} Jan 28 15:21:31 crc kubenswrapper[4981]: I0128 15:21:31.939008 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=30.935477298 podStartE2EDuration="36.938989781s" podCreationTimestamp="2026-01-28 15:20:55 +0000 UTC" firstStartedPulling="2026-01-28 15:21:13.946744896 +0000 UTC m=+1085.398903137" lastFinishedPulling="2026-01-28 15:21:19.950257379 +0000 UTC m=+1091.402415620" observedRunningTime="2026-01-28 15:21:31.931994737 +0000 UTC m=+1103.384152978" watchObservedRunningTime="2026-01-28 15:21:31.938989781 +0000 UTC m=+1103.391148022" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.233523 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-k8fts"] Jan 28 15:21:32 crc kubenswrapper[4981]: E0128 15:21:32.234129 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9744a838-2a91-46ee-bd31-524304313885" containerName="ovn-config" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.234267 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="9744a838-2a91-46ee-bd31-524304313885" containerName="ovn-config" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.234595 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="9744a838-2a91-46ee-bd31-524304313885" containerName="ovn-config" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.235913 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.241007 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.244743 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-k8fts\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") " pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.244840 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-k8fts\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") " pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.244910 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcksv\" (UniqueName: \"kubernetes.io/projected/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-kube-api-access-jcksv\") pod \"dnsmasq-dns-77585f5f8c-k8fts\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") " pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.244940 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-config\") pod \"dnsmasq-dns-77585f5f8c-k8fts\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") " pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.245036 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-k8fts\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") " pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.245080 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-k8fts\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") " pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.255180 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-k8fts"] Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.346860 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-k8fts\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") " pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.346921 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-k8fts\" (UID: 
\"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") " pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.346999 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcksv\" (UniqueName: \"kubernetes.io/projected/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-kube-api-access-jcksv\") pod \"dnsmasq-dns-77585f5f8c-k8fts\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") " pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.347016 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-config\") pod \"dnsmasq-dns-77585f5f8c-k8fts\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") " pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.347070 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-k8fts\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") " pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.347085 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-k8fts\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") " pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.348753 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-ovsdbserver-sb\") pod \"dnsmasq-dns-77585f5f8c-k8fts\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") " pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.348809 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-config\") pod \"dnsmasq-dns-77585f5f8c-k8fts\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") " pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.348831 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-dns-svc\") pod \"dnsmasq-dns-77585f5f8c-k8fts\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") " pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.348866 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-dns-swift-storage-0\") pod \"dnsmasq-dns-77585f5f8c-k8fts\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") " pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.349093 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-ovsdbserver-nb\") pod \"dnsmasq-dns-77585f5f8c-k8fts\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") " pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: 
I0128 15:21:32.365826 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcksv\" (UniqueName: \"kubernetes.io/projected/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-kube-api-access-jcksv\") pod \"dnsmasq-dns-77585f5f8c-k8fts\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") " pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.556455 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.724890 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-h227q"] Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.726615 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h227q" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.736033 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-h227q"] Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.753582 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86753537-ca73-42b1-900c-89a238d6bd4e-operator-scripts\") pod \"cinder-db-create-h227q\" (UID: \"86753537-ca73-42b1-900c-89a238d6bd4e\") " pod="openstack/cinder-db-create-h227q" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.753666 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86pp6\" (UniqueName: \"kubernetes.io/projected/86753537-ca73-42b1-900c-89a238d6bd4e-kube-api-access-86pp6\") pod \"cinder-db-create-h227q\" (UID: \"86753537-ca73-42b1-900c-89a238d6bd4e\") " pod="openstack/cinder-db-create-h227q" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.823457 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-84pn6"] Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.824795 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-84pn6" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.832848 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-84pn6"] Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.851073 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-c073-account-create-update-zgk9m"] Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.852313 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c073-account-create-update-zgk9m" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.854489 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.854735 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p4ms\" (UniqueName: \"kubernetes.io/projected/876c9310-5038-4cad-a381-b7d06ecd9fef-kube-api-access-4p4ms\") pod \"barbican-db-create-84pn6\" (UID: \"876c9310-5038-4cad-a381-b7d06ecd9fef\") " pod="openstack/barbican-db-create-84pn6" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.854785 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/876c9310-5038-4cad-a381-b7d06ecd9fef-operator-scripts\") pod \"barbican-db-create-84pn6\" (UID: \"876c9310-5038-4cad-a381-b7d06ecd9fef\") " pod="openstack/barbican-db-create-84pn6" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.854816 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86753537-ca73-42b1-900c-89a238d6bd4e-operator-scripts\") pod \"cinder-db-create-h227q\" (UID: \"86753537-ca73-42b1-900c-89a238d6bd4e\") " pod="openstack/cinder-db-create-h227q" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.854849 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86pp6\" (UniqueName: \"kubernetes.io/projected/86753537-ca73-42b1-900c-89a238d6bd4e-kube-api-access-86pp6\") pod \"cinder-db-create-h227q\" (UID: \"86753537-ca73-42b1-900c-89a238d6bd4e\") " pod="openstack/cinder-db-create-h227q" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.855670 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86753537-ca73-42b1-900c-89a238d6bd4e-operator-scripts\") pod \"cinder-db-create-h227q\" (UID: \"86753537-ca73-42b1-900c-89a238d6bd4e\") " pod="openstack/cinder-db-create-h227q" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.884819 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c073-account-create-update-zgk9m"] Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.899855 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86pp6\" (UniqueName: \"kubernetes.io/projected/86753537-ca73-42b1-900c-89a238d6bd4e-kube-api-access-86pp6\") pod \"cinder-db-create-h227q\" (UID: \"86753537-ca73-42b1-900c-89a238d6bd4e\") " pod="openstack/cinder-db-create-h227q" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.942698 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-f34f-account-create-update-vdhpg"] Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.943669 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-f34f-account-create-update-vdhpg" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.945799 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.956456 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skm62\" (UniqueName: \"kubernetes.io/projected/d90db1e4-5229-4543-9037-76dd0f5063eb-kube-api-access-skm62\") pod \"barbican-f34f-account-create-update-vdhpg\" (UID: \"d90db1e4-5229-4543-9037-76dd0f5063eb\") " pod="openstack/barbican-f34f-account-create-update-vdhpg" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.956546 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p4ms\" (UniqueName: \"kubernetes.io/projected/876c9310-5038-4cad-a381-b7d06ecd9fef-kube-api-access-4p4ms\") pod \"barbican-db-create-84pn6\" (UID: \"876c9310-5038-4cad-a381-b7d06ecd9fef\") " pod="openstack/barbican-db-create-84pn6" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.956596 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/876c9310-5038-4cad-a381-b7d06ecd9fef-operator-scripts\") pod \"barbican-db-create-84pn6\" (UID: \"876c9310-5038-4cad-a381-b7d06ecd9fef\") " pod="openstack/barbican-db-create-84pn6" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.956625 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90db1e4-5229-4543-9037-76dd0f5063eb-operator-scripts\") pod \"barbican-f34f-account-create-update-vdhpg\" (UID: \"d90db1e4-5229-4543-9037-76dd0f5063eb\") " pod="openstack/barbican-f34f-account-create-update-vdhpg" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.957671 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/876c9310-5038-4cad-a381-b7d06ecd9fef-operator-scripts\") pod \"barbican-db-create-84pn6\" (UID: \"876c9310-5038-4cad-a381-b7d06ecd9fef\") " pod="openstack/barbican-db-create-84pn6" Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.964249 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f34f-account-create-update-vdhpg"] Jan 28 15:21:32 crc kubenswrapper[4981]: I0128 15:21:32.975881 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p4ms\" (UniqueName: \"kubernetes.io/projected/876c9310-5038-4cad-a381-b7d06ecd9fef-kube-api-access-4p4ms\") pod \"barbican-db-create-84pn6\" (UID: \"876c9310-5038-4cad-a381-b7d06ecd9fef\") " pod="openstack/barbican-db-create-84pn6" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.023104 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-9ngsk"] Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.024137 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9ngsk" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.040669 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9ngsk"] Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.044262 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-h227q" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.058149 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90db1e4-5229-4543-9037-76dd0f5063eb-operator-scripts\") pod \"barbican-f34f-account-create-update-vdhpg\" (UID: \"d90db1e4-5229-4543-9037-76dd0f5063eb\") " pod="openstack/barbican-f34f-account-create-update-vdhpg" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.058240 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3-operator-scripts\") pod \"cinder-c073-account-create-update-zgk9m\" (UID: \"dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3\") " pod="openstack/cinder-c073-account-create-update-zgk9m" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.058305 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6cz2\" (UniqueName: \"kubernetes.io/projected/dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3-kube-api-access-j6cz2\") pod \"cinder-c073-account-create-update-zgk9m\" (UID: \"dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3\") " pod="openstack/cinder-c073-account-create-update-zgk9m" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.058339 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skm62\" (UniqueName: \"kubernetes.io/projected/d90db1e4-5229-4543-9037-76dd0f5063eb-kube-api-access-skm62\") pod \"barbican-f34f-account-create-update-vdhpg\" (UID: \"d90db1e4-5229-4543-9037-76dd0f5063eb\") " pod="openstack/barbican-f34f-account-create-update-vdhpg" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.058937 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90db1e4-5229-4543-9037-76dd0f5063eb-operator-scripts\") pod \"barbican-f34f-account-create-update-vdhpg\" (UID: \"d90db1e4-5229-4543-9037-76dd0f5063eb\") " pod="openstack/barbican-f34f-account-create-update-vdhpg" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.094588 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skm62\" (UniqueName: \"kubernetes.io/projected/d90db1e4-5229-4543-9037-76dd0f5063eb-kube-api-access-skm62\") pod \"barbican-f34f-account-create-update-vdhpg\" (UID: \"d90db1e4-5229-4543-9037-76dd0f5063eb\") " pod="openstack/barbican-f34f-account-create-update-vdhpg" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.118525 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-57b4-account-create-update-vmrr8"] Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.120055 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-57b4-account-create-update-vmrr8" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.125150 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.148585 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-84pn6" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.160434 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3-operator-scripts\") pod \"cinder-c073-account-create-update-zgk9m\" (UID: \"dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3\") " pod="openstack/cinder-c073-account-create-update-zgk9m" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.161497 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2694\" (UniqueName: \"kubernetes.io/projected/cff680cf-ec0c-41e0-82bf-4d59bbac0238-kube-api-access-m2694\") pod \"neutron-db-create-9ngsk\" (UID: \"cff680cf-ec0c-41e0-82bf-4d59bbac0238\") " pod="openstack/neutron-db-create-9ngsk" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.161759 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6cz2\" (UniqueName: \"kubernetes.io/projected/dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3-kube-api-access-j6cz2\") pod \"cinder-c073-account-create-update-zgk9m\" (UID: \"dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3\") " pod="openstack/cinder-c073-account-create-update-zgk9m" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.162365 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cff680cf-ec0c-41e0-82bf-4d59bbac0238-operator-scripts\") pod \"neutron-db-create-9ngsk\" (UID: \"cff680cf-ec0c-41e0-82bf-4d59bbac0238\") " pod="openstack/neutron-db-create-9ngsk" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.161451 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3-operator-scripts\") pod \"cinder-c073-account-create-update-zgk9m\" (UID: \"dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3\") " pod="openstack/cinder-c073-account-create-update-zgk9m" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.176386 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57b4-account-create-update-vmrr8"] Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.228513 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-r876j"] Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.229456 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-r876j" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.232233 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.232249 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.232406 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-28pgk" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.232546 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.238470 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6cz2\" (UniqueName: \"kubernetes.io/projected/dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3-kube-api-access-j6cz2\") pod \"cinder-c073-account-create-update-zgk9m\" (UID: \"dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3\") " pod="openstack/cinder-c073-account-create-update-zgk9m" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.244675 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-r876j"] Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.270755 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f34f-account-create-update-vdhpg" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.273831 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2694\" (UniqueName: \"kubernetes.io/projected/cff680cf-ec0c-41e0-82bf-4d59bbac0238-kube-api-access-m2694\") pod \"neutron-db-create-9ngsk\" (UID: \"cff680cf-ec0c-41e0-82bf-4d59bbac0238\") " pod="openstack/neutron-db-create-9ngsk" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.275154 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cbc9552-255a-40df-ab3c-aed79b3c0b5c-config-data\") pod \"keystone-db-sync-r876j\" (UID: \"6cbc9552-255a-40df-ab3c-aed79b3c0b5c\") " pod="openstack/keystone-db-sync-r876j" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.275252 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6pc7\" (UniqueName: \"kubernetes.io/projected/7fa484df-952c-4be2-9edc-b8118029bf2e-kube-api-access-j6pc7\") pod \"neutron-57b4-account-create-update-vmrr8\" (UID: \"7fa484df-952c-4be2-9edc-b8118029bf2e\") " pod="openstack/neutron-57b4-account-create-update-vmrr8" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.275305 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fa484df-952c-4be2-9edc-b8118029bf2e-operator-scripts\") pod \"neutron-57b4-account-create-update-vmrr8\" (UID: \"7fa484df-952c-4be2-9edc-b8118029bf2e\") " pod="openstack/neutron-57b4-account-create-update-vmrr8" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.275447 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pzl7\" (UniqueName: \"kubernetes.io/projected/6cbc9552-255a-40df-ab3c-aed79b3c0b5c-kube-api-access-4pzl7\") pod \"keystone-db-sync-r876j\" (UID: \"6cbc9552-255a-40df-ab3c-aed79b3c0b5c\") " 
pod="openstack/keystone-db-sync-r876j" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.275532 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cff680cf-ec0c-41e0-82bf-4d59bbac0238-operator-scripts\") pod \"neutron-db-create-9ngsk\" (UID: \"cff680cf-ec0c-41e0-82bf-4d59bbac0238\") " pod="openstack/neutron-db-create-9ngsk" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.275601 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cbc9552-255a-40df-ab3c-aed79b3c0b5c-combined-ca-bundle\") pod \"keystone-db-sync-r876j\" (UID: \"6cbc9552-255a-40df-ab3c-aed79b3c0b5c\") " pod="openstack/keystone-db-sync-r876j" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.276835 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cff680cf-ec0c-41e0-82bf-4d59bbac0238-operator-scripts\") pod \"neutron-db-create-9ngsk\" (UID: \"cff680cf-ec0c-41e0-82bf-4d59bbac0238\") " pod="openstack/neutron-db-create-9ngsk" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.300732 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2694\" (UniqueName: \"kubernetes.io/projected/cff680cf-ec0c-41e0-82bf-4d59bbac0238-kube-api-access-m2694\") pod \"neutron-db-create-9ngsk\" (UID: \"cff680cf-ec0c-41e0-82bf-4d59bbac0238\") " pod="openstack/neutron-db-create-9ngsk" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.341342 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9ngsk" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.376789 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cbc9552-255a-40df-ab3c-aed79b3c0b5c-combined-ca-bundle\") pod \"keystone-db-sync-r876j\" (UID: \"6cbc9552-255a-40df-ab3c-aed79b3c0b5c\") " pod="openstack/keystone-db-sync-r876j" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.376943 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cbc9552-255a-40df-ab3c-aed79b3c0b5c-config-data\") pod \"keystone-db-sync-r876j\" (UID: \"6cbc9552-255a-40df-ab3c-aed79b3c0b5c\") " pod="openstack/keystone-db-sync-r876j" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.376993 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6pc7\" (UniqueName: \"kubernetes.io/projected/7fa484df-952c-4be2-9edc-b8118029bf2e-kube-api-access-j6pc7\") pod \"neutron-57b4-account-create-update-vmrr8\" (UID: \"7fa484df-952c-4be2-9edc-b8118029bf2e\") " pod="openstack/neutron-57b4-account-create-update-vmrr8" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.377027 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fa484df-952c-4be2-9edc-b8118029bf2e-operator-scripts\") pod \"neutron-57b4-account-create-update-vmrr8\" (UID: \"7fa484df-952c-4be2-9edc-b8118029bf2e\") " pod="openstack/neutron-57b4-account-create-update-vmrr8" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.377097 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pzl7\" (UniqueName: 
\"kubernetes.io/projected/6cbc9552-255a-40df-ab3c-aed79b3c0b5c-kube-api-access-4pzl7\") pod \"keystone-db-sync-r876j\" (UID: \"6cbc9552-255a-40df-ab3c-aed79b3c0b5c\") " pod="openstack/keystone-db-sync-r876j" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.377882 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fa484df-952c-4be2-9edc-b8118029bf2e-operator-scripts\") pod \"neutron-57b4-account-create-update-vmrr8\" (UID: \"7fa484df-952c-4be2-9edc-b8118029bf2e\") " pod="openstack/neutron-57b4-account-create-update-vmrr8" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.381033 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cbc9552-255a-40df-ab3c-aed79b3c0b5c-config-data\") pod \"keystone-db-sync-r876j\" (UID: \"6cbc9552-255a-40df-ab3c-aed79b3c0b5c\") " pod="openstack/keystone-db-sync-r876j" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.382166 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cbc9552-255a-40df-ab3c-aed79b3c0b5c-combined-ca-bundle\") pod \"keystone-db-sync-r876j\" (UID: \"6cbc9552-255a-40df-ab3c-aed79b3c0b5c\") " pod="openstack/keystone-db-sync-r876j" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.394797 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6pc7\" (UniqueName: \"kubernetes.io/projected/7fa484df-952c-4be2-9edc-b8118029bf2e-kube-api-access-j6pc7\") pod \"neutron-57b4-account-create-update-vmrr8\" (UID: \"7fa484df-952c-4be2-9edc-b8118029bf2e\") " pod="openstack/neutron-57b4-account-create-update-vmrr8" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.402582 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pzl7\" (UniqueName: \"kubernetes.io/projected/6cbc9552-255a-40df-ab3c-aed79b3c0b5c-kube-api-access-4pzl7\") pod \"keystone-db-sync-r876j\" (UID: \"6cbc9552-255a-40df-ab3c-aed79b3c0b5c\") " pod="openstack/keystone-db-sync-r876j" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.436826 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-57b4-account-create-update-vmrr8" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.468150 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c073-account-create-update-zgk9m" Jan 28 15:21:33 crc kubenswrapper[4981]: I0128 15:21:33.598751 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-r876j" Jan 28 15:21:37 crc kubenswrapper[4981]: I0128 15:21:37.799108 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-r876j"] Jan 28 15:21:37 crc kubenswrapper[4981]: W0128 15:21:37.813691 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6cbc9552_255a_40df_ab3c_aed79b3c0b5c.slice/crio-4c775883803f7b39e9bace9e4963009f9cac6932d223ac288c4587594e928b2b WatchSource:0}: Error finding container 4c775883803f7b39e9bace9e4963009f9cac6932d223ac288c4587594e928b2b: Status 404 returned error can't find the container with id 4c775883803f7b39e9bace9e4963009f9cac6932d223ac288c4587594e928b2b Jan 28 15:21:37 crc kubenswrapper[4981]: W0128 15:21:37.817165 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86753537_ca73_42b1_900c_89a238d6bd4e.slice/crio-05f846576a40671570ebaf5decd7a4cdcc37e3323a11d7fda9cd58bc6c65066e WatchSource:0}: Error finding container 05f846576a40671570ebaf5decd7a4cdcc37e3323a11d7fda9cd58bc6c65066e: Status 404 returned error can't find the container with id 05f846576a40671570ebaf5decd7a4cdcc37e3323a11d7fda9cd58bc6c65066e Jan 28 15:21:37 crc kubenswrapper[4981]: I0128 15:21:37.821798 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-h227q"] Jan 28 15:21:37 crc kubenswrapper[4981]: I0128 15:21:37.837499 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-c073-account-create-update-zgk9m"] Jan 28 15:21:37 crc kubenswrapper[4981]: I0128 15:21:37.847031 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-57b4-account-create-update-vmrr8"] Jan 28 15:21:37 crc kubenswrapper[4981]: I0128 15:21:37.852703 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-9ngsk"] Jan 28 15:21:37 crc kubenswrapper[4981]: W0128 15:21:37.866409 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fa484df_952c_4be2_9edc_b8118029bf2e.slice/crio-f3ec6f59116f7cd2bc1aeb04367cf8409527ef62717485e5c550f79beeb4180a WatchSource:0}: Error finding container f3ec6f59116f7cd2bc1aeb04367cf8409527ef62717485e5c550f79beeb4180a: Status 404 returned error can't find the container with id f3ec6f59116f7cd2bc1aeb04367cf8409527ef62717485e5c550f79beeb4180a Jan 28 15:21:37 crc kubenswrapper[4981]: W0128 15:21:37.876668 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcff680cf_ec0c_41e0_82bf_4d59bbac0238.slice/crio-5309574cf27d860b6042b1f28b872e11e8361c69af5c3d6ce307a6a29b98da4f WatchSource:0}: Error finding container 5309574cf27d860b6042b1f28b872e11e8361c69af5c3d6ce307a6a29b98da4f: Status 404 returned error can't find the container with id 5309574cf27d860b6042b1f28b872e11e8361c69af5c3d6ce307a6a29b98da4f Jan 28 15:21:37 crc kubenswrapper[4981]: I0128 15:21:37.951257 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6qpmd" event={"ID":"f5f8f119-df95-4eef-979b-ae7d2cd54f00","Type":"ContainerStarted","Data":"b30e7391f856211a243b6e4e3a702379b89b851ee1790f8c587638b4c77ebc1e"} Jan 28 15:21:37 crc kubenswrapper[4981]: I0128 15:21:37.953741 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9ngsk" 
event={"ID":"cff680cf-ec0c-41e0-82bf-4d59bbac0238","Type":"ContainerStarted","Data":"5309574cf27d860b6042b1f28b872e11e8361c69af5c3d6ce307a6a29b98da4f"} Jan 28 15:21:37 crc kubenswrapper[4981]: I0128 15:21:37.954581 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57b4-account-create-update-vmrr8" event={"ID":"7fa484df-952c-4be2-9edc-b8118029bf2e","Type":"ContainerStarted","Data":"f3ec6f59116f7cd2bc1aeb04367cf8409527ef62717485e5c550f79beeb4180a"} Jan 28 15:21:37 crc kubenswrapper[4981]: I0128 15:21:37.955691 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r876j" event={"ID":"6cbc9552-255a-40df-ab3c-aed79b3c0b5c","Type":"ContainerStarted","Data":"4c775883803f7b39e9bace9e4963009f9cac6932d223ac288c4587594e928b2b"} Jan 28 15:21:37 crc kubenswrapper[4981]: I0128 15:21:37.956788 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h227q" event={"ID":"86753537-ca73-42b1-900c-89a238d6bd4e","Type":"ContainerStarted","Data":"05f846576a40671570ebaf5decd7a4cdcc37e3323a11d7fda9cd58bc6c65066e"} Jan 28 15:21:37 crc kubenswrapper[4981]: I0128 15:21:37.960737 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c073-account-create-update-zgk9m" event={"ID":"dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3","Type":"ContainerStarted","Data":"4985c5d7c04d86261f055319fa523579f5b9e8953a91232ed23ad9aa3939b122"} Jan 28 15:21:37 crc kubenswrapper[4981]: I0128 15:21:37.977462 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-6qpmd" podStartSLOduration=2.109755011 podStartE2EDuration="23.977450398s" podCreationTimestamp="2026-01-28 15:21:14 +0000 UTC" firstStartedPulling="2026-01-28 15:21:15.11072824 +0000 UTC m=+1086.562886481" lastFinishedPulling="2026-01-28 15:21:36.978423607 +0000 UTC m=+1108.430581868" observedRunningTime="2026-01-28 15:21:37.973039302 +0000 UTC m=+1109.425197543" watchObservedRunningTime="2026-01-28 15:21:37.977450398 +0000 UTC m=+1109.429608629" Jan 28 15:21:38 crc kubenswrapper[4981]: I0128 15:21:38.028024 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f34f-account-create-update-vdhpg"] Jan 28 15:21:38 crc kubenswrapper[4981]: I0128 15:21:38.042825 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-k8fts"] Jan 28 15:21:38 crc kubenswrapper[4981]: W0128 15:21:38.048633 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd90db1e4_5229_4543_9037_76dd0f5063eb.slice/crio-e8bda91220901d84db89f433eaa047fb3cfd76ada6cf12da30df4f3eaea64d47 WatchSource:0}: Error finding container e8bda91220901d84db89f433eaa047fb3cfd76ada6cf12da30df4f3eaea64d47: Status 404 returned error can't find the container with id e8bda91220901d84db89f433eaa047fb3cfd76ada6cf12da30df4f3eaea64d47 Jan 28 15:21:38 crc kubenswrapper[4981]: W0128 15:21:38.048969 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod200b2bf9_ee0c_42d2_9307_d6f0868cd3e0.slice/crio-efe0c946197296eca72d0fb2155ffbb0feb1ee4c495692f24c3f5f0dabbb4fc4 WatchSource:0}: Error finding container efe0c946197296eca72d0fb2155ffbb0feb1ee4c495692f24c3f5f0dabbb4fc4: Status 404 returned error can't find the container with id efe0c946197296eca72d0fb2155ffbb0feb1ee4c495692f24c3f5f0dabbb4fc4 Jan 28 15:21:38 crc kubenswrapper[4981]: I0128 15:21:38.051147 4981 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-84pn6"] Jan 28 15:21:38 crc kubenswrapper[4981]: I0128 15:21:38.975869 4981 generic.go:334] "Generic (PLEG): container finished" podID="d90db1e4-5229-4543-9037-76dd0f5063eb" containerID="417fe418f54d8719bba5c35178614922cc1d37cd590b1d7810f18f3052216b2e" exitCode=0 Jan 28 15:21:38 crc kubenswrapper[4981]: I0128 15:21:38.976594 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f34f-account-create-update-vdhpg" event={"ID":"d90db1e4-5229-4543-9037-76dd0f5063eb","Type":"ContainerDied","Data":"417fe418f54d8719bba5c35178614922cc1d37cd590b1d7810f18f3052216b2e"} Jan 28 15:21:38 crc kubenswrapper[4981]: I0128 15:21:38.976625 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f34f-account-create-update-vdhpg" event={"ID":"d90db1e4-5229-4543-9037-76dd0f5063eb","Type":"ContainerStarted","Data":"e8bda91220901d84db89f433eaa047fb3cfd76ada6cf12da30df4f3eaea64d47"} Jan 28 15:21:38 crc kubenswrapper[4981]: I0128 15:21:38.979787 4981 generic.go:334] "Generic (PLEG): container finished" podID="86753537-ca73-42b1-900c-89a238d6bd4e" containerID="8b98d6be7d196c2d777400d70fa95ca66495c80121ea2487450f2cac935d8e2c" exitCode=0 Jan 28 15:21:38 crc kubenswrapper[4981]: I0128 15:21:38.979844 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h227q" event={"ID":"86753537-ca73-42b1-900c-89a238d6bd4e","Type":"ContainerDied","Data":"8b98d6be7d196c2d777400d70fa95ca66495c80121ea2487450f2cac935d8e2c"} Jan 28 15:21:38 crc kubenswrapper[4981]: I0128 15:21:38.983918 4981 generic.go:334] "Generic (PLEG): container finished" podID="dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3" containerID="8817e309f229f5f8e99f9814c9af19a5b4b5b7b7531a53d19ef9d263b00ac3fd" exitCode=0 Jan 28 15:21:38 crc kubenswrapper[4981]: I0128 15:21:38.983989 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c073-account-create-update-zgk9m" event={"ID":"dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3","Type":"ContainerDied","Data":"8817e309f229f5f8e99f9814c9af19a5b4b5b7b7531a53d19ef9d263b00ac3fd"} Jan 28 15:21:38 crc kubenswrapper[4981]: I0128 15:21:38.988385 4981 generic.go:334] "Generic (PLEG): container finished" podID="cff680cf-ec0c-41e0-82bf-4d59bbac0238" containerID="3b2f5bf4e7f9afb32a385fcfd420dea99f5fbf615dd31d49bedae1048471795f" exitCode=0 Jan 28 15:21:38 crc kubenswrapper[4981]: I0128 15:21:38.988483 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9ngsk" event={"ID":"cff680cf-ec0c-41e0-82bf-4d59bbac0238","Type":"ContainerDied","Data":"3b2f5bf4e7f9afb32a385fcfd420dea99f5fbf615dd31d49bedae1048471795f"} Jan 28 15:21:38 crc kubenswrapper[4981]: I0128 15:21:38.994969 4981 generic.go:334] "Generic (PLEG): container finished" podID="876c9310-5038-4cad-a381-b7d06ecd9fef" containerID="ea739a09197cb87415b3ee3fcc9fb6db4e7cb3d9b08f58d8e348919fe05ae972" exitCode=0 Jan 28 15:21:38 crc kubenswrapper[4981]: I0128 15:21:38.995143 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-84pn6" event={"ID":"876c9310-5038-4cad-a381-b7d06ecd9fef","Type":"ContainerDied","Data":"ea739a09197cb87415b3ee3fcc9fb6db4e7cb3d9b08f58d8e348919fe05ae972"} Jan 28 15:21:38 crc kubenswrapper[4981]: I0128 15:21:38.995174 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-84pn6" 
event={"ID":"876c9310-5038-4cad-a381-b7d06ecd9fef","Type":"ContainerStarted","Data":"9adc2f6017facde3c94b656c325d63eea6ed0ac9036b3829eb8ce9471864e14f"} Jan 28 15:21:38 crc kubenswrapper[4981]: I0128 15:21:38.996937 4981 generic.go:334] "Generic (PLEG): container finished" podID="7fa484df-952c-4be2-9edc-b8118029bf2e" containerID="c3b1e4c28c6434de5d292a70e72d25fd1667c6a5b92b0ad332eaa2239cefd236" exitCode=0 Jan 28 15:21:38 crc kubenswrapper[4981]: I0128 15:21:38.996982 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57b4-account-create-update-vmrr8" event={"ID":"7fa484df-952c-4be2-9edc-b8118029bf2e","Type":"ContainerDied","Data":"c3b1e4c28c6434de5d292a70e72d25fd1667c6a5b92b0ad332eaa2239cefd236"} Jan 28 15:21:39 crc kubenswrapper[4981]: I0128 15:21:39.000438 4981 generic.go:334] "Generic (PLEG): container finished" podID="200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" containerID="93282a6f9f134d74734185bd38f2556ec80ebdf2e5b6861a5bc306e450d45d19" exitCode=0 Jan 28 15:21:39 crc kubenswrapper[4981]: I0128 15:21:39.001509 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" event={"ID":"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0","Type":"ContainerDied","Data":"93282a6f9f134d74734185bd38f2556ec80ebdf2e5b6861a5bc306e450d45d19"} Jan 28 15:21:39 crc kubenswrapper[4981]: I0128 15:21:39.001533 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" event={"ID":"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0","Type":"ContainerStarted","Data":"efe0c946197296eca72d0fb2155ffbb0feb1ee4c495692f24c3f5f0dabbb4fc4"} Jan 28 15:21:40 crc kubenswrapper[4981]: I0128 15:21:40.011624 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" event={"ID":"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0","Type":"ContainerStarted","Data":"ab2a0c3b00b1e84b7c8d81cf457df0ea9d9a59e155067872470da7822793dd35"} Jan 28 15:21:40 crc kubenswrapper[4981]: I0128 15:21:40.037334 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" podStartSLOduration=8.037321022 podStartE2EDuration="8.037321022s" podCreationTimestamp="2026-01-28 15:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:40.034635872 +0000 UTC m=+1111.486794113" watchObservedRunningTime="2026-01-28 15:21:40.037321022 +0000 UTC m=+1111.489479263" Jan 28 15:21:41 crc kubenswrapper[4981]: I0128 15:21:41.022090 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:42 crc kubenswrapper[4981]: I0128 15:21:42.881255 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-57b4-account-create-update-vmrr8" Jan 28 15:21:42 crc kubenswrapper[4981]: I0128 15:21:42.886373 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h227q" Jan 28 15:21:42 crc kubenswrapper[4981]: I0128 15:21:42.956787 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-c073-account-create-update-zgk9m" Jan 28 15:21:42 crc kubenswrapper[4981]: I0128 15:21:42.963091 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-9ngsk" Jan 28 15:21:42 crc kubenswrapper[4981]: I0128 15:21:42.969801 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-84pn6" Jan 28 15:21:42 crc kubenswrapper[4981]: I0128 15:21:42.988914 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f34f-account-create-update-vdhpg" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.043773 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f34f-account-create-update-vdhpg" event={"ID":"d90db1e4-5229-4543-9037-76dd0f5063eb","Type":"ContainerDied","Data":"e8bda91220901d84db89f433eaa047fb3cfd76ada6cf12da30df4f3eaea64d47"} Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.043812 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8bda91220901d84db89f433eaa047fb3cfd76ada6cf12da30df4f3eaea64d47" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.043865 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f34f-account-create-update-vdhpg" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.045023 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r876j" event={"ID":"6cbc9552-255a-40df-ab3c-aed79b3c0b5c","Type":"ContainerStarted","Data":"d9060a931b6c8b9edc866fa50248457b3160fe5ef58caf13abe68b93862aee10"} Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.047481 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-h227q" event={"ID":"86753537-ca73-42b1-900c-89a238d6bd4e","Type":"ContainerDied","Data":"05f846576a40671570ebaf5decd7a4cdcc37e3323a11d7fda9cd58bc6c65066e"} Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.047513 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05f846576a40671570ebaf5decd7a4cdcc37e3323a11d7fda9cd58bc6c65066e" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.047593 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-h227q" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.050842 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-c073-account-create-update-zgk9m" event={"ID":"dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3","Type":"ContainerDied","Data":"4985c5d7c04d86261f055319fa523579f5b9e8953a91232ed23ad9aa3939b122"} Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.050873 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4985c5d7c04d86261f055319fa523579f5b9e8953a91232ed23ad9aa3939b122" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.051031 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-c073-account-create-update-zgk9m" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.053661 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-9ngsk" event={"ID":"cff680cf-ec0c-41e0-82bf-4d59bbac0238","Type":"ContainerDied","Data":"5309574cf27d860b6042b1f28b872e11e8361c69af5c3d6ce307a6a29b98da4f"} Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.053694 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5309574cf27d860b6042b1f28b872e11e8361c69af5c3d6ce307a6a29b98da4f" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.053770 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-9ngsk" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.055872 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-84pn6" event={"ID":"876c9310-5038-4cad-a381-b7d06ecd9fef","Type":"ContainerDied","Data":"9adc2f6017facde3c94b656c325d63eea6ed0ac9036b3829eb8ce9471864e14f"} Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.055890 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9adc2f6017facde3c94b656c325d63eea6ed0ac9036b3829eb8ce9471864e14f" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.055926 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-84pn6" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.056580 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6pc7\" (UniqueName: \"kubernetes.io/projected/7fa484df-952c-4be2-9edc-b8118029bf2e-kube-api-access-j6pc7\") pod \"7fa484df-952c-4be2-9edc-b8118029bf2e\" (UID: \"7fa484df-952c-4be2-9edc-b8118029bf2e\") " Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.056637 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86pp6\" (UniqueName: \"kubernetes.io/projected/86753537-ca73-42b1-900c-89a238d6bd4e-kube-api-access-86pp6\") pod \"86753537-ca73-42b1-900c-89a238d6bd4e\" (UID: \"86753537-ca73-42b1-900c-89a238d6bd4e\") " Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.057123 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6cz2\" (UniqueName: \"kubernetes.io/projected/dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3-kube-api-access-j6cz2\") pod \"dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3\" (UID: \"dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3\") " Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.057175 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fa484df-952c-4be2-9edc-b8118029bf2e-operator-scripts\") pod \"7fa484df-952c-4be2-9edc-b8118029bf2e\" (UID: \"7fa484df-952c-4be2-9edc-b8118029bf2e\") " Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.057225 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86753537-ca73-42b1-900c-89a238d6bd4e-operator-scripts\") pod \"86753537-ca73-42b1-900c-89a238d6bd4e\" (UID: \"86753537-ca73-42b1-900c-89a238d6bd4e\") " Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.057250 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skm62\" (UniqueName: 
\"kubernetes.io/projected/d90db1e4-5229-4543-9037-76dd0f5063eb-kube-api-access-skm62\") pod \"d90db1e4-5229-4543-9037-76dd0f5063eb\" (UID: \"d90db1e4-5229-4543-9037-76dd0f5063eb\") " Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.057288 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3-operator-scripts\") pod \"dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3\" (UID: \"dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3\") " Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.057320 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p4ms\" (UniqueName: \"kubernetes.io/projected/876c9310-5038-4cad-a381-b7d06ecd9fef-kube-api-access-4p4ms\") pod \"876c9310-5038-4cad-a381-b7d06ecd9fef\" (UID: \"876c9310-5038-4cad-a381-b7d06ecd9fef\") " Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.057632 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fa484df-952c-4be2-9edc-b8118029bf2e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7fa484df-952c-4be2-9edc-b8118029bf2e" (UID: "7fa484df-952c-4be2-9edc-b8118029bf2e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.057961 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fa484df-952c-4be2-9edc-b8118029bf2e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.059085 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86753537-ca73-42b1-900c-89a238d6bd4e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "86753537-ca73-42b1-900c-89a238d6bd4e" (UID: "86753537-ca73-42b1-900c-89a238d6bd4e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.059173 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-57b4-account-create-update-vmrr8" event={"ID":"7fa484df-952c-4be2-9edc-b8118029bf2e","Type":"ContainerDied","Data":"f3ec6f59116f7cd2bc1aeb04367cf8409527ef62717485e5c550f79beeb4180a"} Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.059219 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3ec6f59116f7cd2bc1aeb04367cf8409527ef62717485e5c550f79beeb4180a" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.059245 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-57b4-account-create-update-vmrr8" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.059173 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3" (UID: "dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.061871 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d90db1e4-5229-4543-9037-76dd0f5063eb-kube-api-access-skm62" (OuterVolumeSpecName: "kube-api-access-skm62") pod "d90db1e4-5229-4543-9037-76dd0f5063eb" (UID: "d90db1e4-5229-4543-9037-76dd0f5063eb"). InnerVolumeSpecName "kube-api-access-skm62". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.062441 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86753537-ca73-42b1-900c-89a238d6bd4e-kube-api-access-86pp6" (OuterVolumeSpecName: "kube-api-access-86pp6") pod "86753537-ca73-42b1-900c-89a238d6bd4e" (UID: "86753537-ca73-42b1-900c-89a238d6bd4e"). InnerVolumeSpecName "kube-api-access-86pp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.064814 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3-kube-api-access-j6cz2" (OuterVolumeSpecName: "kube-api-access-j6cz2") pod "dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3" (UID: "dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3"). InnerVolumeSpecName "kube-api-access-j6cz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.065270 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/876c9310-5038-4cad-a381-b7d06ecd9fef-kube-api-access-4p4ms" (OuterVolumeSpecName: "kube-api-access-4p4ms") pod "876c9310-5038-4cad-a381-b7d06ecd9fef" (UID: "876c9310-5038-4cad-a381-b7d06ecd9fef"). InnerVolumeSpecName "kube-api-access-4p4ms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.072874 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-r876j" podStartSLOduration=5.1644831270000005 podStartE2EDuration="10.072850854s" podCreationTimestamp="2026-01-28 15:21:33 +0000 UTC" firstStartedPulling="2026-01-28 15:21:37.817162991 +0000 UTC m=+1109.269321232" lastFinishedPulling="2026-01-28 15:21:42.725530708 +0000 UTC m=+1114.177688959" observedRunningTime="2026-01-28 15:21:43.070844221 +0000 UTC m=+1114.523002482" watchObservedRunningTime="2026-01-28 15:21:43.072850854 +0000 UTC m=+1114.525009095" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.092580 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fa484df-952c-4be2-9edc-b8118029bf2e-kube-api-access-j6pc7" (OuterVolumeSpecName: "kube-api-access-j6pc7") pod "7fa484df-952c-4be2-9edc-b8118029bf2e" (UID: "7fa484df-952c-4be2-9edc-b8118029bf2e"). InnerVolumeSpecName "kube-api-access-j6pc7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.158234 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2694\" (UniqueName: \"kubernetes.io/projected/cff680cf-ec0c-41e0-82bf-4d59bbac0238-kube-api-access-m2694\") pod \"cff680cf-ec0c-41e0-82bf-4d59bbac0238\" (UID: \"cff680cf-ec0c-41e0-82bf-4d59bbac0238\") " Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.158275 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90db1e4-5229-4543-9037-76dd0f5063eb-operator-scripts\") pod \"d90db1e4-5229-4543-9037-76dd0f5063eb\" (UID: \"d90db1e4-5229-4543-9037-76dd0f5063eb\") " Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.158305 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/876c9310-5038-4cad-a381-b7d06ecd9fef-operator-scripts\") pod \"876c9310-5038-4cad-a381-b7d06ecd9fef\" (UID: \"876c9310-5038-4cad-a381-b7d06ecd9fef\") " Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.158328 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cff680cf-ec0c-41e0-82bf-4d59bbac0238-operator-scripts\") pod \"cff680cf-ec0c-41e0-82bf-4d59bbac0238\" (UID: \"cff680cf-ec0c-41e0-82bf-4d59bbac0238\") " Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.158582 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.158594 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p4ms\" (UniqueName: \"kubernetes.io/projected/876c9310-5038-4cad-a381-b7d06ecd9fef-kube-api-access-4p4ms\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.158604 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6pc7\" (UniqueName: \"kubernetes.io/projected/7fa484df-952c-4be2-9edc-b8118029bf2e-kube-api-access-j6pc7\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.158612 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86pp6\" (UniqueName: \"kubernetes.io/projected/86753537-ca73-42b1-900c-89a238d6bd4e-kube-api-access-86pp6\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.158620 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6cz2\" (UniqueName: \"kubernetes.io/projected/dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3-kube-api-access-j6cz2\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.158629 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/86753537-ca73-42b1-900c-89a238d6bd4e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.158637 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skm62\" (UniqueName: \"kubernetes.io/projected/d90db1e4-5229-4543-9037-76dd0f5063eb-kube-api-access-skm62\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.159362 4981 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d90db1e4-5229-4543-9037-76dd0f5063eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d90db1e4-5229-4543-9037-76dd0f5063eb" (UID: "d90db1e4-5229-4543-9037-76dd0f5063eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.159379 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/876c9310-5038-4cad-a381-b7d06ecd9fef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "876c9310-5038-4cad-a381-b7d06ecd9fef" (UID: "876c9310-5038-4cad-a381-b7d06ecd9fef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.159518 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cff680cf-ec0c-41e0-82bf-4d59bbac0238-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cff680cf-ec0c-41e0-82bf-4d59bbac0238" (UID: "cff680cf-ec0c-41e0-82bf-4d59bbac0238"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.163452 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cff680cf-ec0c-41e0-82bf-4d59bbac0238-kube-api-access-m2694" (OuterVolumeSpecName: "kube-api-access-m2694") pod "cff680cf-ec0c-41e0-82bf-4d59bbac0238" (UID: "cff680cf-ec0c-41e0-82bf-4d59bbac0238"). InnerVolumeSpecName "kube-api-access-m2694". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.260274 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2694\" (UniqueName: \"kubernetes.io/projected/cff680cf-ec0c-41e0-82bf-4d59bbac0238-kube-api-access-m2694\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.260419 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d90db1e4-5229-4543-9037-76dd0f5063eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.260442 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/876c9310-5038-4cad-a381-b7d06ecd9fef-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:43 crc kubenswrapper[4981]: I0128 15:21:43.260460 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cff680cf-ec0c-41e0-82bf-4d59bbac0238-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:47 crc kubenswrapper[4981]: I0128 15:21:47.101220 4981 generic.go:334] "Generic (PLEG): container finished" podID="f5f8f119-df95-4eef-979b-ae7d2cd54f00" containerID="b30e7391f856211a243b6e4e3a702379b89b851ee1790f8c587638b4c77ebc1e" exitCode=0 Jan 28 15:21:47 crc kubenswrapper[4981]: I0128 15:21:47.101345 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6qpmd" event={"ID":"f5f8f119-df95-4eef-979b-ae7d2cd54f00","Type":"ContainerDied","Data":"b30e7391f856211a243b6e4e3a702379b89b851ee1790f8c587638b4c77ebc1e"} Jan 28 15:21:47 crc kubenswrapper[4981]: I0128 15:21:47.105620 4981 generic.go:334] "Generic (PLEG): container finished" podID="6cbc9552-255a-40df-ab3c-aed79b3c0b5c" 
containerID="d9060a931b6c8b9edc866fa50248457b3160fe5ef58caf13abe68b93862aee10" exitCode=0 Jan 28 15:21:47 crc kubenswrapper[4981]: I0128 15:21:47.105691 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r876j" event={"ID":"6cbc9552-255a-40df-ab3c-aed79b3c0b5c","Type":"ContainerDied","Data":"d9060a931b6c8b9edc866fa50248457b3160fe5ef58caf13abe68b93862aee10"} Jan 28 15:21:47 crc kubenswrapper[4981]: I0128 15:21:47.558489 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" Jan 28 15:21:47 crc kubenswrapper[4981]: I0128 15:21:47.642274 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-n4hbg"] Jan 28 15:21:47 crc kubenswrapper[4981]: I0128 15:21:47.643161 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-n4hbg" podUID="c4d13dbf-51e4-47d3-8fa9-2abb69b3b270" containerName="dnsmasq-dns" containerID="cri-o://1a9d68e47df47aca977d37e16e7a8384eebe4debcf6ef16547b4e97cbfb54064" gracePeriod=10 Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.055076 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.114582 4981 generic.go:334] "Generic (PLEG): container finished" podID="c4d13dbf-51e4-47d3-8fa9-2abb69b3b270" containerID="1a9d68e47df47aca977d37e16e7a8384eebe4debcf6ef16547b4e97cbfb54064" exitCode=0 Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.114610 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-n4hbg" event={"ID":"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270","Type":"ContainerDied","Data":"1a9d68e47df47aca977d37e16e7a8384eebe4debcf6ef16547b4e97cbfb54064"} Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.114647 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-n4hbg" event={"ID":"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270","Type":"ContainerDied","Data":"5dcf39f8968ed448de0715dd2e9da530a4c782fb4bd340735d8e87f1c4a499bd"} Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.114666 4981 scope.go:117] "RemoveContainer" containerID="1a9d68e47df47aca977d37e16e7a8384eebe4debcf6ef16547b4e97cbfb54064" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.114695 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-n4hbg" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.149050 4981 scope.go:117] "RemoveContainer" containerID="f108debd7f7c6960822eb34ed1519740ad8e96f93a15145c017b97b0537ab391" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.177789 4981 scope.go:117] "RemoveContainer" containerID="1a9d68e47df47aca977d37e16e7a8384eebe4debcf6ef16547b4e97cbfb54064" Jan 28 15:21:48 crc kubenswrapper[4981]: E0128 15:21:48.178497 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a9d68e47df47aca977d37e16e7a8384eebe4debcf6ef16547b4e97cbfb54064\": container with ID starting with 1a9d68e47df47aca977d37e16e7a8384eebe4debcf6ef16547b4e97cbfb54064 not found: ID does not exist" containerID="1a9d68e47df47aca977d37e16e7a8384eebe4debcf6ef16547b4e97cbfb54064" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.178761 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a9d68e47df47aca977d37e16e7a8384eebe4debcf6ef16547b4e97cbfb54064"} err="failed to get container status \"1a9d68e47df47aca977d37e16e7a8384eebe4debcf6ef16547b4e97cbfb54064\": rpc error: code = NotFound desc = could not find container \"1a9d68e47df47aca977d37e16e7a8384eebe4debcf6ef16547b4e97cbfb54064\": container with ID starting with 1a9d68e47df47aca977d37e16e7a8384eebe4debcf6ef16547b4e97cbfb54064 not found: ID does not exist" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.178794 4981 scope.go:117] "RemoveContainer" containerID="f108debd7f7c6960822eb34ed1519740ad8e96f93a15145c017b97b0537ab391" Jan 28 15:21:48 crc kubenswrapper[4981]: E0128 15:21:48.179554 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f108debd7f7c6960822eb34ed1519740ad8e96f93a15145c017b97b0537ab391\": container with ID starting with f108debd7f7c6960822eb34ed1519740ad8e96f93a15145c017b97b0537ab391 not found: ID does not exist" containerID="f108debd7f7c6960822eb34ed1519740ad8e96f93a15145c017b97b0537ab391" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.179577 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f108debd7f7c6960822eb34ed1519740ad8e96f93a15145c017b97b0537ab391"} err="failed to get container status \"f108debd7f7c6960822eb34ed1519740ad8e96f93a15145c017b97b0537ab391\": rpc error: code = NotFound desc = could not find container \"f108debd7f7c6960822eb34ed1519740ad8e96f93a15145c017b97b0537ab391\": container with ID starting with f108debd7f7c6960822eb34ed1519740ad8e96f93a15145c017b97b0537ab391 not found: ID does not exist" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.252359 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-ovsdbserver-nb\") pod \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.252417 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-dns-svc\") pod \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.252472 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-config\") pod \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.252491 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-ovsdbserver-sb\") pod \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.252529 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvm4z\" (UniqueName: \"kubernetes.io/projected/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-kube-api-access-jvm4z\") pod \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\" (UID: \"c4d13dbf-51e4-47d3-8fa9-2abb69b3b270\") " Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.280829 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-kube-api-access-jvm4z" (OuterVolumeSpecName: "kube-api-access-jvm4z") pod "c4d13dbf-51e4-47d3-8fa9-2abb69b3b270" (UID: "c4d13dbf-51e4-47d3-8fa9-2abb69b3b270"). InnerVolumeSpecName "kube-api-access-jvm4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.317092 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c4d13dbf-51e4-47d3-8fa9-2abb69b3b270" (UID: "c4d13dbf-51e4-47d3-8fa9-2abb69b3b270"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.319088 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-config" (OuterVolumeSpecName: "config") pod "c4d13dbf-51e4-47d3-8fa9-2abb69b3b270" (UID: "c4d13dbf-51e4-47d3-8fa9-2abb69b3b270"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.322694 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c4d13dbf-51e4-47d3-8fa9-2abb69b3b270" (UID: "c4d13dbf-51e4-47d3-8fa9-2abb69b3b270"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.341787 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c4d13dbf-51e4-47d3-8fa9-2abb69b3b270" (UID: "c4d13dbf-51e4-47d3-8fa9-2abb69b3b270"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.357411 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.357430 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.357440 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvm4z\" (UniqueName: \"kubernetes.io/projected/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-kube-api-access-jvm4z\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.357452 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.357459 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.359364 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r876j" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.416449 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6qpmd" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.453219 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-n4hbg"] Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.463681 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-n4hbg"] Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.560233 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5f8f119-df95-4eef-979b-ae7d2cd54f00-db-sync-config-data\") pod \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\" (UID: \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\") " Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.560991 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f8f119-df95-4eef-979b-ae7d2cd54f00-config-data\") pod \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\" (UID: \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\") " Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.561233 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6g8v\" (UniqueName: \"kubernetes.io/projected/f5f8f119-df95-4eef-979b-ae7d2cd54f00-kube-api-access-w6g8v\") pod \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\" (UID: \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\") " Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.561276 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pzl7\" (UniqueName: \"kubernetes.io/projected/6cbc9552-255a-40df-ab3c-aed79b3c0b5c-kube-api-access-4pzl7\") pod \"6cbc9552-255a-40df-ab3c-aed79b3c0b5c\" (UID: \"6cbc9552-255a-40df-ab3c-aed79b3c0b5c\") " Jan 28 15:21:48 crc 
kubenswrapper[4981]: I0128 15:21:48.561340 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cbc9552-255a-40df-ab3c-aed79b3c0b5c-combined-ca-bundle\") pod \"6cbc9552-255a-40df-ab3c-aed79b3c0b5c\" (UID: \"6cbc9552-255a-40df-ab3c-aed79b3c0b5c\") " Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.561450 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cbc9552-255a-40df-ab3c-aed79b3c0b5c-config-data\") pod \"6cbc9552-255a-40df-ab3c-aed79b3c0b5c\" (UID: \"6cbc9552-255a-40df-ab3c-aed79b3c0b5c\") " Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.561506 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f8f119-df95-4eef-979b-ae7d2cd54f00-combined-ca-bundle\") pod \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\" (UID: \"f5f8f119-df95-4eef-979b-ae7d2cd54f00\") " Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.565134 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5f8f119-df95-4eef-979b-ae7d2cd54f00-kube-api-access-w6g8v" (OuterVolumeSpecName: "kube-api-access-w6g8v") pod "f5f8f119-df95-4eef-979b-ae7d2cd54f00" (UID: "f5f8f119-df95-4eef-979b-ae7d2cd54f00"). InnerVolumeSpecName "kube-api-access-w6g8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.565333 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5f8f119-df95-4eef-979b-ae7d2cd54f00-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "f5f8f119-df95-4eef-979b-ae7d2cd54f00" (UID: "f5f8f119-df95-4eef-979b-ae7d2cd54f00"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.566394 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cbc9552-255a-40df-ab3c-aed79b3c0b5c-kube-api-access-4pzl7" (OuterVolumeSpecName: "kube-api-access-4pzl7") pod "6cbc9552-255a-40df-ab3c-aed79b3c0b5c" (UID: "6cbc9552-255a-40df-ab3c-aed79b3c0b5c"). InnerVolumeSpecName "kube-api-access-4pzl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.585922 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5f8f119-df95-4eef-979b-ae7d2cd54f00-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5f8f119-df95-4eef-979b-ae7d2cd54f00" (UID: "f5f8f119-df95-4eef-979b-ae7d2cd54f00"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.604967 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5f8f119-df95-4eef-979b-ae7d2cd54f00-config-data" (OuterVolumeSpecName: "config-data") pod "f5f8f119-df95-4eef-979b-ae7d2cd54f00" (UID: "f5f8f119-df95-4eef-979b-ae7d2cd54f00"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.605995 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cbc9552-255a-40df-ab3c-aed79b3c0b5c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6cbc9552-255a-40df-ab3c-aed79b3c0b5c" (UID: "6cbc9552-255a-40df-ab3c-aed79b3c0b5c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.619140 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cbc9552-255a-40df-ab3c-aed79b3c0b5c-config-data" (OuterVolumeSpecName: "config-data") pod "6cbc9552-255a-40df-ab3c-aed79b3c0b5c" (UID: "6cbc9552-255a-40df-ab3c-aed79b3c0b5c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.663808 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6g8v\" (UniqueName: \"kubernetes.io/projected/f5f8f119-df95-4eef-979b-ae7d2cd54f00-kube-api-access-w6g8v\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.663841 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pzl7\" (UniqueName: \"kubernetes.io/projected/6cbc9552-255a-40df-ab3c-aed79b3c0b5c-kube-api-access-4pzl7\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.663852 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cbc9552-255a-40df-ab3c-aed79b3c0b5c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.663861 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cbc9552-255a-40df-ab3c-aed79b3c0b5c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.663870 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5f8f119-df95-4eef-979b-ae7d2cd54f00-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.663877 4981 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/f5f8f119-df95-4eef-979b-ae7d2cd54f00-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:48 crc kubenswrapper[4981]: I0128 15:21:48.663886 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5f8f119-df95-4eef-979b-ae7d2cd54f00-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.127214 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6qpmd" event={"ID":"f5f8f119-df95-4eef-979b-ae7d2cd54f00","Type":"ContainerDied","Data":"389840b1f95478de110c38dde5ab8fe82ffd0b601df1128e6843108913ed22f6"} Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.127555 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="389840b1f95478de110c38dde5ab8fe82ffd0b601df1128e6843108913ed22f6" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.127267 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6qpmd" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.129484 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-r876j" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.129500 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-r876j" event={"ID":"6cbc9552-255a-40df-ab3c-aed79b3c0b5c","Type":"ContainerDied","Data":"4c775883803f7b39e9bace9e4963009f9cac6932d223ac288c4587594e928b2b"} Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.129535 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c775883803f7b39e9bace9e4963009f9cac6932d223ac288c4587594e928b2b" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.346282 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4d13dbf-51e4-47d3-8fa9-2abb69b3b270" path="/var/lib/kubelet/pods/c4d13dbf-51e4-47d3-8fa9-2abb69b3b270/volumes" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.671065 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-7xzlx"] Jan 28 15:21:49 crc kubenswrapper[4981]: E0128 15:21:49.671744 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d13dbf-51e4-47d3-8fa9-2abb69b3b270" containerName="init" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.671826 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d13dbf-51e4-47d3-8fa9-2abb69b3b270" containerName="init" Jan 28 15:21:49 crc kubenswrapper[4981]: E0128 15:21:49.671908 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3" containerName="mariadb-account-create-update" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.671962 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3" containerName="mariadb-account-create-update" Jan 28 15:21:49 crc kubenswrapper[4981]: E0128 15:21:49.672029 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d90db1e4-5229-4543-9037-76dd0f5063eb" containerName="mariadb-account-create-update" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.672109 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="d90db1e4-5229-4543-9037-76dd0f5063eb" containerName="mariadb-account-create-update" Jan 28 15:21:49 crc kubenswrapper[4981]: E0128 15:21:49.672244 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="876c9310-5038-4cad-a381-b7d06ecd9fef" containerName="mariadb-database-create" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.672319 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="876c9310-5038-4cad-a381-b7d06ecd9fef" containerName="mariadb-database-create" Jan 28 15:21:49 crc kubenswrapper[4981]: E0128 15:21:49.672382 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cbc9552-255a-40df-ab3c-aed79b3c0b5c" containerName="keystone-db-sync" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.672434 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cbc9552-255a-40df-ab3c-aed79b3c0b5c" containerName="keystone-db-sync" Jan 28 15:21:49 crc kubenswrapper[4981]: E0128 15:21:49.672492 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fa484df-952c-4be2-9edc-b8118029bf2e" containerName="mariadb-account-create-update" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.672543 4981 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7fa484df-952c-4be2-9edc-b8118029bf2e" containerName="mariadb-account-create-update" Jan 28 15:21:49 crc kubenswrapper[4981]: E0128 15:21:49.672607 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5f8f119-df95-4eef-979b-ae7d2cd54f00" containerName="glance-db-sync" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.672708 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5f8f119-df95-4eef-979b-ae7d2cd54f00" containerName="glance-db-sync" Jan 28 15:21:49 crc kubenswrapper[4981]: E0128 15:21:49.672924 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86753537-ca73-42b1-900c-89a238d6bd4e" containerName="mariadb-database-create" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.673004 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="86753537-ca73-42b1-900c-89a238d6bd4e" containerName="mariadb-database-create" Jan 28 15:21:49 crc kubenswrapper[4981]: E0128 15:21:49.673075 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cff680cf-ec0c-41e0-82bf-4d59bbac0238" containerName="mariadb-database-create" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.673139 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cff680cf-ec0c-41e0-82bf-4d59bbac0238" containerName="mariadb-database-create" Jan 28 15:21:49 crc kubenswrapper[4981]: E0128 15:21:49.673206 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4d13dbf-51e4-47d3-8fa9-2abb69b3b270" containerName="dnsmasq-dns" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.673274 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4d13dbf-51e4-47d3-8fa9-2abb69b3b270" containerName="dnsmasq-dns" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.673524 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4d13dbf-51e4-47d3-8fa9-2abb69b3b270" containerName="dnsmasq-dns" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.673613 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3" containerName="mariadb-account-create-update" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.673688 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cbc9552-255a-40df-ab3c-aed79b3c0b5c" containerName="keystone-db-sync" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.673758 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fa484df-952c-4be2-9edc-b8118029bf2e" containerName="mariadb-account-create-update" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.673825 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5f8f119-df95-4eef-979b-ae7d2cd54f00" containerName="glance-db-sync" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.673887 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cff680cf-ec0c-41e0-82bf-4d59bbac0238" containerName="mariadb-database-create" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.673956 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="d90db1e4-5229-4543-9037-76dd0f5063eb" containerName="mariadb-account-create-update" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.674028 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="876c9310-5038-4cad-a381-b7d06ecd9fef" containerName="mariadb-database-create" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.674096 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="86753537-ca73-42b1-900c-89a238d6bd4e" 
containerName="mariadb-database-create" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.674973 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.677081 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-7xzlx"] Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.689069 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-7xzlx\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.689167 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-dns-svc\") pod \"dnsmasq-dns-55fff446b9-7xzlx\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.689204 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-7xzlx\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.689236 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpbvs\" (UniqueName: \"kubernetes.io/projected/5b8f224d-ac44-43c1-bf59-d16858e364ec-kube-api-access-gpbvs\") pod \"dnsmasq-dns-55fff446b9-7xzlx\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.689271 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-7xzlx\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.689305 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-config\") pod \"dnsmasq-dns-55fff446b9-7xzlx\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.725266 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-kcxvv"] Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.726517 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.730057 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.730252 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-28pgk" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.730396 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.730530 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.734385 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.757907 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kcxvv"] Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.790570 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-scripts\") pod \"keystone-bootstrap-kcxvv\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") " pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.790621 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-config\") pod \"dnsmasq-dns-55fff446b9-7xzlx\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.790642 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-combined-ca-bundle\") pod \"keystone-bootstrap-kcxvv\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") " pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.790671 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-fernet-keys\") pod \"keystone-bootstrap-kcxvv\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") " pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.790697 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-7xzlx\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.790735 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj97z\" (UniqueName: \"kubernetes.io/projected/98b74a60-bac3-481c-bb29-48e58ee43fed-kube-api-access-pj97z\") pod \"keystone-bootstrap-kcxvv\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") " pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.790753 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-credential-keys\") pod \"keystone-bootstrap-kcxvv\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") " pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.790791 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-dns-svc\") pod \"dnsmasq-dns-55fff446b9-7xzlx\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.790809 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-7xzlx\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.790827 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpbvs\" (UniqueName: \"kubernetes.io/projected/5b8f224d-ac44-43c1-bf59-d16858e364ec-kube-api-access-gpbvs\") pod \"dnsmasq-dns-55fff446b9-7xzlx\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.790844 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-config-data\") pod \"keystone-bootstrap-kcxvv\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") " pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.790869 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-7xzlx\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.791658 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-dns-swift-storage-0\") pod \"dnsmasq-dns-55fff446b9-7xzlx\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.791981 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-config\") pod \"dnsmasq-dns-55fff446b9-7xzlx\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.792241 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-dns-svc\") pod \"dnsmasq-dns-55fff446b9-7xzlx\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.792690 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-ovsdbserver-nb\") pod \"dnsmasq-dns-55fff446b9-7xzlx\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.792858 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-ovsdbserver-sb\") pod \"dnsmasq-dns-55fff446b9-7xzlx\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.840844 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpbvs\" (UniqueName: \"kubernetes.io/projected/5b8f224d-ac44-43c1-bf59-d16858e364ec-kube-api-access-gpbvs\") pod \"dnsmasq-dns-55fff446b9-7xzlx\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.892984 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-scripts\") pod \"keystone-bootstrap-kcxvv\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") " pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.893040 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-combined-ca-bundle\") pod \"keystone-bootstrap-kcxvv\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") " pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.893073 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-fernet-keys\") pod \"keystone-bootstrap-kcxvv\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") " pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.893130 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj97z\" (UniqueName: \"kubernetes.io/projected/98b74a60-bac3-481c-bb29-48e58ee43fed-kube-api-access-pj97z\") pod \"keystone-bootstrap-kcxvv\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") " pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.893149 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-credential-keys\") pod \"keystone-bootstrap-kcxvv\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") " pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.893212 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-config-data\") pod \"keystone-bootstrap-kcxvv\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") " pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.909297 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-dd4f7ddbc-vtfnj"] Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.910923 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-dd4f7ddbc-vtfnj" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.914288 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-config-data\") pod \"keystone-bootstrap-kcxvv\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") " pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.916772 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-fernet-keys\") pod \"keystone-bootstrap-kcxvv\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") " pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.916836 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-h9wgp"] Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.917954 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-h9wgp" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.919457 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-credential-keys\") pod \"keystone-bootstrap-kcxvv\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") " pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.921570 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-scripts\") pod \"keystone-bootstrap-kcxvv\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") " pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.923152 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-combined-ca-bundle\") pod \"keystone-bootstrap-kcxvv\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") " pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.926755 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj97z\" (UniqueName: \"kubernetes.io/projected/98b74a60-bac3-481c-bb29-48e58ee43fed-kube-api-access-pj97z\") pod \"keystone-bootstrap-kcxvv\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") " pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.931744 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-h8htg"] Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.932876 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.939261 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-h9wgp"] Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.948035 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.948309 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.948452 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-2sb4z" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.948598 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.948819 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.948960 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.949538 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9qvbr" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.949737 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.949847 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.949965 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-zxwcf" Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.963685 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-h8htg"] Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.983861 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-dd4f7ddbc-vtfnj"] Jan 28 15:21:49 crc kubenswrapper[4981]: I0128 15:21:49.996045 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.003115 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.004986 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.017242 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.017954 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.040869 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-7xzlx"] Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.063238 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kcxvv" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.085498 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.095672 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m"] Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.097001 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102256 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/150ae7b2-4b64-48c7-86b3-71d7841afba3-combined-ca-bundle\") pod \"neutron-db-sync-h9wgp\" (UID: \"150ae7b2-4b64-48c7-86b3-71d7841afba3\") " pod="openstack/neutron-db-sync-h9wgp" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102295 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-scripts\") pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102323 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/464cfecb-1d69-466f-90f1-9d9ac1166673-logs\") pod \"horizon-dd4f7ddbc-vtfnj\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") " pod="openstack/horizon-dd4f7ddbc-vtfnj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102344 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl2c9\" (UniqueName: \"kubernetes.io/projected/464cfecb-1d69-466f-90f1-9d9ac1166673-kube-api-access-sl2c9\") pod \"horizon-dd4f7ddbc-vtfnj\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") " pod="openstack/horizon-dd4f7ddbc-vtfnj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102368 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/464cfecb-1d69-466f-90f1-9d9ac1166673-scripts\") pod \"horizon-dd4f7ddbc-vtfnj\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") " pod="openstack/horizon-dd4f7ddbc-vtfnj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102385 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4xdk\" (UniqueName: \"kubernetes.io/projected/150ae7b2-4b64-48c7-86b3-71d7841afba3-kube-api-access-g4xdk\") pod \"neutron-db-sync-h9wgp\" (UID: \"150ae7b2-4b64-48c7-86b3-71d7841afba3\") " pod="openstack/neutron-db-sync-h9wgp" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102407 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-scripts\") pod \"cinder-db-sync-h8htg\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") " pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102427 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-sg-core-conf-yaml\") 
pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102445 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-config-data\") pod \"cinder-db-sync-h8htg\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") " pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102463 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-config-data\") pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102484 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rbls\" (UniqueName: \"kubernetes.io/projected/5a747315-c181-4459-ae1d-3c0c5252efb7-kube-api-access-4rbls\") pod \"cinder-db-sync-h8htg\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") " pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102537 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-db-sync-config-data\") pod \"cinder-db-sync-h8htg\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") " pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102558 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-g5z8m\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102581 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vftnm\" (UniqueName: \"kubernetes.io/projected/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-kube-api-access-vftnm\") pod \"dnsmasq-dns-5c5cc7c5ff-g5z8m\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102602 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-combined-ca-bundle\") pod \"cinder-db-sync-h8htg\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") " pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102632 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pst9\" (UniqueName: \"kubernetes.io/projected/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-kube-api-access-9pst9\") pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102649 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-dns-svc\") pod 
\"dnsmasq-dns-5c5cc7c5ff-g5z8m\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102670 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-g5z8m\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102687 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-config\") pod \"dnsmasq-dns-5c5cc7c5ff-g5z8m\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102704 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-g5z8m\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102725 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/464cfecb-1d69-466f-90f1-9d9ac1166673-horizon-secret-key\") pod \"horizon-dd4f7ddbc-vtfnj\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") " pod="openstack/horizon-dd4f7ddbc-vtfnj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102749 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102780 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-run-httpd\") pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102800 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-log-httpd\") pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102817 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/464cfecb-1d69-466f-90f1-9d9ac1166673-config-data\") pod \"horizon-dd4f7ddbc-vtfnj\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") " pod="openstack/horizon-dd4f7ddbc-vtfnj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102836 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a747315-c181-4459-ae1d-3c0c5252efb7-etc-machine-id\") pod 
\"cinder-db-sync-h8htg\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") " pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.102868 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/150ae7b2-4b64-48c7-86b3-71d7841afba3-config\") pod \"neutron-db-sync-h9wgp\" (UID: \"150ae7b2-4b64-48c7-86b3-71d7841afba3\") " pod="openstack/neutron-db-sync-h9wgp" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.124043 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-jd49r"] Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.125059 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jd49r" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.131428 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-t6ht4"] Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.132749 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-7x9vx" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.132990 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.133282 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t6ht4" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.139699 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.139983 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-r5qwq" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.140117 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.165529 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-t6ht4"] Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.173722 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jd49r"] Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.199265 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m"] Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.203764 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.203804 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82101008-6112-4a68-8776-7a2c896b5eab-combined-ca-bundle\") pod \"barbican-db-sync-jd49r\" (UID: \"82101008-6112-4a68-8776-7a2c896b5eab\") " pod="openstack/barbican-db-sync-jd49r" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.203832 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-run-httpd\") pod \"ceilometer-0\" (UID: 
\"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.203847 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-scripts\") pod \"placement-db-sync-t6ht4\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " pod="openstack/placement-db-sync-t6ht4" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.203866 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/464cfecb-1d69-466f-90f1-9d9ac1166673-config-data\") pod \"horizon-dd4f7ddbc-vtfnj\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") " pod="openstack/horizon-dd4f7ddbc-vtfnj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.203878 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-log-httpd\") pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.203894 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a747315-c181-4459-ae1d-3c0c5252efb7-etc-machine-id\") pod \"cinder-db-sync-h8htg\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") " pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.203911 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-config-data\") pod \"placement-db-sync-t6ht4\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " pod="openstack/placement-db-sync-t6ht4" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.203927 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/150ae7b2-4b64-48c7-86b3-71d7841afba3-config\") pod \"neutron-db-sync-h9wgp\" (UID: \"150ae7b2-4b64-48c7-86b3-71d7841afba3\") " pod="openstack/neutron-db-sync-h9wgp" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.203954 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs4hx\" (UniqueName: \"kubernetes.io/projected/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-kube-api-access-zs4hx\") pod \"placement-db-sync-t6ht4\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " pod="openstack/placement-db-sync-t6ht4" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.203972 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/150ae7b2-4b64-48c7-86b3-71d7841afba3-combined-ca-bundle\") pod \"neutron-db-sync-h9wgp\" (UID: \"150ae7b2-4b64-48c7-86b3-71d7841afba3\") " pod="openstack/neutron-db-sync-h9wgp" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.203985 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-scripts\") pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204007 4981 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/464cfecb-1d69-466f-90f1-9d9ac1166673-logs\") pod \"horizon-dd4f7ddbc-vtfnj\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") " pod="openstack/horizon-dd4f7ddbc-vtfnj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204023 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl2c9\" (UniqueName: \"kubernetes.io/projected/464cfecb-1d69-466f-90f1-9d9ac1166673-kube-api-access-sl2c9\") pod \"horizon-dd4f7ddbc-vtfnj\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") " pod="openstack/horizon-dd4f7ddbc-vtfnj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204040 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/464cfecb-1d69-466f-90f1-9d9ac1166673-scripts\") pod \"horizon-dd4f7ddbc-vtfnj\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") " pod="openstack/horizon-dd4f7ddbc-vtfnj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204057 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4xdk\" (UniqueName: \"kubernetes.io/projected/150ae7b2-4b64-48c7-86b3-71d7841afba3-kube-api-access-g4xdk\") pod \"neutron-db-sync-h9wgp\" (UID: \"150ae7b2-4b64-48c7-86b3-71d7841afba3\") " pod="openstack/neutron-db-sync-h9wgp" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204072 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-scripts\") pod \"cinder-db-sync-h8htg\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") " pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204086 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95cw8\" (UniqueName: \"kubernetes.io/projected/82101008-6112-4a68-8776-7a2c896b5eab-kube-api-access-95cw8\") pod \"barbican-db-sync-jd49r\" (UID: \"82101008-6112-4a68-8776-7a2c896b5eab\") " pod="openstack/barbican-db-sync-jd49r" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204103 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204120 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-config-data\") pod \"cinder-db-sync-h8htg\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") " pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204135 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-config-data\") pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204152 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rbls\" (UniqueName: \"kubernetes.io/projected/5a747315-c181-4459-ae1d-3c0c5252efb7-kube-api-access-4rbls\") pod \"cinder-db-sync-h8htg\" (UID: 
\"5a747315-c181-4459-ae1d-3c0c5252efb7\") " pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204197 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-combined-ca-bundle\") pod \"placement-db-sync-t6ht4\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " pod="openstack/placement-db-sync-t6ht4" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204221 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-db-sync-config-data\") pod \"cinder-db-sync-h8htg\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") " pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204237 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-g5z8m\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204257 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vftnm\" (UniqueName: \"kubernetes.io/projected/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-kube-api-access-vftnm\") pod \"dnsmasq-dns-5c5cc7c5ff-g5z8m\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204278 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-combined-ca-bundle\") pod \"cinder-db-sync-h8htg\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") " pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204301 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82101008-6112-4a68-8776-7a2c896b5eab-db-sync-config-data\") pod \"barbican-db-sync-jd49r\" (UID: \"82101008-6112-4a68-8776-7a2c896b5eab\") " pod="openstack/barbican-db-sync-jd49r" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204321 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pst9\" (UniqueName: \"kubernetes.io/projected/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-kube-api-access-9pst9\") pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204337 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-g5z8m\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204354 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-g5z8m\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " 
pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204371 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-config\") pod \"dnsmasq-dns-5c5cc7c5ff-g5z8m\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204386 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-g5z8m\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204403 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-logs\") pod \"placement-db-sync-t6ht4\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " pod="openstack/placement-db-sync-t6ht4" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.204420 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/464cfecb-1d69-466f-90f1-9d9ac1166673-horizon-secret-key\") pod \"horizon-dd4f7ddbc-vtfnj\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") " pod="openstack/horizon-dd4f7ddbc-vtfnj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.210608 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-run-httpd\") pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.214384 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/464cfecb-1d69-466f-90f1-9d9ac1166673-config-data\") pod \"horizon-dd4f7ddbc-vtfnj\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") " pod="openstack/horizon-dd4f7ddbc-vtfnj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.215474 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/150ae7b2-4b64-48c7-86b3-71d7841afba3-combined-ca-bundle\") pod \"neutron-db-sync-h9wgp\" (UID: \"150ae7b2-4b64-48c7-86b3-71d7841afba3\") " pod="openstack/neutron-db-sync-h9wgp" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.215480 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.215772 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-log-httpd\") pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.215818 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/5a747315-c181-4459-ae1d-3c0c5252efb7-etc-machine-id\") pod \"cinder-db-sync-h8htg\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") " pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.217225 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/464cfecb-1d69-466f-90f1-9d9ac1166673-horizon-secret-key\") pod \"horizon-dd4f7ddbc-vtfnj\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") " pod="openstack/horizon-dd4f7ddbc-vtfnj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.217975 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/464cfecb-1d69-466f-90f1-9d9ac1166673-scripts\") pod \"horizon-dd4f7ddbc-vtfnj\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") " pod="openstack/horizon-dd4f7ddbc-vtfnj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.222811 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-dns-swift-storage-0\") pod \"dnsmasq-dns-5c5cc7c5ff-g5z8m\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.223763 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-dns-svc\") pod \"dnsmasq-dns-5c5cc7c5ff-g5z8m\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.228892 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/464cfecb-1d69-466f-90f1-9d9ac1166673-logs\") pod \"horizon-dd4f7ddbc-vtfnj\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") " pod="openstack/horizon-dd4f7ddbc-vtfnj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.230444 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-ovsdbserver-nb\") pod \"dnsmasq-dns-5c5cc7c5ff-g5z8m\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.231474 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-ovsdbserver-sb\") pod \"dnsmasq-dns-5c5cc7c5ff-g5z8m\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.232471 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-config\") pod \"dnsmasq-dns-5c5cc7c5ff-g5z8m\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.236599 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-scripts\") pod \"cinder-db-sync-h8htg\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") " pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:50 crc kubenswrapper[4981]: 
I0128 15:21:50.237078 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-scripts\") pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.254942 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/150ae7b2-4b64-48c7-86b3-71d7841afba3-config\") pod \"neutron-db-sync-h9wgp\" (UID: \"150ae7b2-4b64-48c7-86b3-71d7841afba3\") " pod="openstack/neutron-db-sync-h9wgp" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.261000 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.261292 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-db-sync-config-data\") pod \"cinder-db-sync-h8htg\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") " pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.261544 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4xdk\" (UniqueName: \"kubernetes.io/projected/150ae7b2-4b64-48c7-86b3-71d7841afba3-kube-api-access-g4xdk\") pod \"neutron-db-sync-h9wgp\" (UID: \"150ae7b2-4b64-48c7-86b3-71d7841afba3\") " pod="openstack/neutron-db-sync-h9wgp" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.262023 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-combined-ca-bundle\") pod \"cinder-db-sync-h8htg\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") " pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.262110 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-config-data\") pod \"cinder-db-sync-h8htg\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") " pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.270087 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m"] Jan 28 15:21:50 crc kubenswrapper[4981]: E0128 15:21:50.270709 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-vftnm], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" podUID="435cd25a-f7ab-4f22-b6ed-11c6ec21b944" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.270942 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rbls\" (UniqueName: \"kubernetes.io/projected/5a747315-c181-4459-ae1d-3c0c5252efb7-kube-api-access-4rbls\") pod \"cinder-db-sync-h8htg\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") " pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.297961 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pst9\" (UniqueName: 
\"kubernetes.io/projected/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-kube-api-access-9pst9\") pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.300221 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-config-data\") pod \"ceilometer-0\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.303022 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vftnm\" (UniqueName: \"kubernetes.io/projected/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-kube-api-access-vftnm\") pod \"dnsmasq-dns-5c5cc7c5ff-g5z8m\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.303094 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl2c9\" (UniqueName: \"kubernetes.io/projected/464cfecb-1d69-466f-90f1-9d9ac1166673-kube-api-access-sl2c9\") pod \"horizon-dd4f7ddbc-vtfnj\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") " pod="openstack/horizon-dd4f7ddbc-vtfnj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.307674 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-scripts\") pod \"placement-db-sync-t6ht4\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " pod="openstack/placement-db-sync-t6ht4" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.311604 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-d54b7587-prgkj"] Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.314909 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-scripts\") pod \"placement-db-sync-t6ht4\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " pod="openstack/placement-db-sync-t6ht4" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.320715 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-config-data\") pod \"placement-db-sync-t6ht4\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " pod="openstack/placement-db-sync-t6ht4" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.335021 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs4hx\" (UniqueName: \"kubernetes.io/projected/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-kube-api-access-zs4hx\") pod \"placement-db-sync-t6ht4\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " pod="openstack/placement-db-sync-t6ht4" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.328338 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-config-data\") pod \"placement-db-sync-t6ht4\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " pod="openstack/placement-db-sync-t6ht4" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.335454 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95cw8\" (UniqueName: 
\"kubernetes.io/projected/82101008-6112-4a68-8776-7a2c896b5eab-kube-api-access-95cw8\") pod \"barbican-db-sync-jd49r\" (UID: \"82101008-6112-4a68-8776-7a2c896b5eab\") " pod="openstack/barbican-db-sync-jd49r" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.337822 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-combined-ca-bundle\") pod \"placement-db-sync-t6ht4\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " pod="openstack/placement-db-sync-t6ht4" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.337943 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82101008-6112-4a68-8776-7a2c896b5eab-db-sync-config-data\") pod \"barbican-db-sync-jd49r\" (UID: \"82101008-6112-4a68-8776-7a2c896b5eab\") " pod="openstack/barbican-db-sync-jd49r" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.338064 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-logs\") pod \"placement-db-sync-t6ht4\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " pod="openstack/placement-db-sync-t6ht4" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.338216 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82101008-6112-4a68-8776-7a2c896b5eab-combined-ca-bundle\") pod \"barbican-db-sync-jd49r\" (UID: \"82101008-6112-4a68-8776-7a2c896b5eab\") " pod="openstack/barbican-db-sync-jd49r" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.338856 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-logs\") pod \"placement-db-sync-t6ht4\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " pod="openstack/placement-db-sync-t6ht4" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.342663 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82101008-6112-4a68-8776-7a2c896b5eab-db-sync-config-data\") pod \"barbican-db-sync-jd49r\" (UID: \"82101008-6112-4a68-8776-7a2c896b5eab\") " pod="openstack/barbican-db-sync-jd49r" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.343378 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82101008-6112-4a68-8776-7a2c896b5eab-combined-ca-bundle\") pod \"barbican-db-sync-jd49r\" (UID: \"82101008-6112-4a68-8776-7a2c896b5eab\") " pod="openstack/barbican-db-sync-jd49r" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.351033 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d54b7587-prgkj"] Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.351150 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-d54b7587-prgkj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.370312 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-combined-ca-bundle\") pod \"placement-db-sync-t6ht4\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " pod="openstack/placement-db-sync-t6ht4" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.380311 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs4hx\" (UniqueName: \"kubernetes.io/projected/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-kube-api-access-zs4hx\") pod \"placement-db-sync-t6ht4\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " pod="openstack/placement-db-sync-t6ht4" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.385094 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95cw8\" (UniqueName: \"kubernetes.io/projected/82101008-6112-4a68-8776-7a2c896b5eab-kube-api-access-95cw8\") pod \"barbican-db-sync-jd49r\" (UID: \"82101008-6112-4a68-8776-7a2c896b5eab\") " pod="openstack/barbican-db-sync-jd49r" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.406560 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7q5wl"] Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.407923 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.428115 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7q5wl"] Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.440069 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-7q5wl\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.440129 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3afe70ea-3dbf-495d-99ab-c2d6af72d624-horizon-secret-key\") pod \"horizon-d54b7587-prgkj\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") " pod="openstack/horizon-d54b7587-prgkj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.440181 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-config\") pod \"dnsmasq-dns-8b5c85b87-7q5wl\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.440226 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3afe70ea-3dbf-495d-99ab-c2d6af72d624-config-data\") pod \"horizon-d54b7587-prgkj\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") " pod="openstack/horizon-d54b7587-prgkj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.440259 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltltr\" (UniqueName: 
\"kubernetes.io/projected/fa17a405-daf2-4bec-ab6d-8337e759165a-kube-api-access-ltltr\") pod \"dnsmasq-dns-8b5c85b87-7q5wl\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.440308 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbxgj\" (UniqueName: \"kubernetes.io/projected/3afe70ea-3dbf-495d-99ab-c2d6af72d624-kube-api-access-qbxgj\") pod \"horizon-d54b7587-prgkj\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") " pod="openstack/horizon-d54b7587-prgkj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.440340 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-7q5wl\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.440355 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afe70ea-3dbf-495d-99ab-c2d6af72d624-logs\") pod \"horizon-d54b7587-prgkj\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") " pod="openstack/horizon-d54b7587-prgkj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.440402 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-7q5wl\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.440460 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-7q5wl\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.440491 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3afe70ea-3dbf-495d-99ab-c2d6af72d624-scripts\") pod \"horizon-d54b7587-prgkj\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") " pod="openstack/horizon-d54b7587-prgkj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.465690 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-h9wgp" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.477615 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-h8htg" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.500653 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.541613 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-config\") pod \"dnsmasq-dns-8b5c85b87-7q5wl\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.542006 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3afe70ea-3dbf-495d-99ab-c2d6af72d624-config-data\") pod \"horizon-d54b7587-prgkj\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") " pod="openstack/horizon-d54b7587-prgkj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.542060 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltltr\" (UniqueName: \"kubernetes.io/projected/fa17a405-daf2-4bec-ab6d-8337e759165a-kube-api-access-ltltr\") pod \"dnsmasq-dns-8b5c85b87-7q5wl\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.542103 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbxgj\" (UniqueName: \"kubernetes.io/projected/3afe70ea-3dbf-495d-99ab-c2d6af72d624-kube-api-access-qbxgj\") pod \"horizon-d54b7587-prgkj\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") " pod="openstack/horizon-d54b7587-prgkj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.542141 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-7q5wl\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.542163 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afe70ea-3dbf-495d-99ab-c2d6af72d624-logs\") pod \"horizon-d54b7587-prgkj\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") " pod="openstack/horizon-d54b7587-prgkj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.542228 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-7q5wl\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.542275 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-7q5wl\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.542347 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3afe70ea-3dbf-495d-99ab-c2d6af72d624-scripts\") pod \"horizon-d54b7587-prgkj\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") " pod="openstack/horizon-d54b7587-prgkj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.542408 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-7q5wl\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.542443 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3afe70ea-3dbf-495d-99ab-c2d6af72d624-horizon-secret-key\") pod \"horizon-d54b7587-prgkj\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") " pod="openstack/horizon-d54b7587-prgkj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.543914 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-ovsdbserver-nb\") pod \"dnsmasq-dns-8b5c85b87-7q5wl\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.543984 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afe70ea-3dbf-495d-99ab-c2d6af72d624-logs\") pod \"horizon-d54b7587-prgkj\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") " pod="openstack/horizon-d54b7587-prgkj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.546425 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-ovsdbserver-sb\") pod \"dnsmasq-dns-8b5c85b87-7q5wl\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.547517 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-dns-swift-storage-0\") pod \"dnsmasq-dns-8b5c85b87-7q5wl\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.547826 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-config\") pod \"dnsmasq-dns-8b5c85b87-7q5wl\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.548110 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3afe70ea-3dbf-495d-99ab-c2d6af72d624-scripts\") pod \"horizon-d54b7587-prgkj\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") " pod="openstack/horizon-d54b7587-prgkj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.552856 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-jd49r" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.566537 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltltr\" (UniqueName: \"kubernetes.io/projected/fa17a405-daf2-4bec-ab6d-8337e759165a-kube-api-access-ltltr\") pod \"dnsmasq-dns-8b5c85b87-7q5wl\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.568825 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-dns-svc\") pod \"dnsmasq-dns-8b5c85b87-7q5wl\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.570411 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3afe70ea-3dbf-495d-99ab-c2d6af72d624-config-data\") pod \"horizon-d54b7587-prgkj\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") " pod="openstack/horizon-d54b7587-prgkj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.572176 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3afe70ea-3dbf-495d-99ab-c2d6af72d624-horizon-secret-key\") pod \"horizon-d54b7587-prgkj\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") " pod="openstack/horizon-d54b7587-prgkj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.573115 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbxgj\" (UniqueName: \"kubernetes.io/projected/3afe70ea-3dbf-495d-99ab-c2d6af72d624-kube-api-access-qbxgj\") pod \"horizon-d54b7587-prgkj\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") " pod="openstack/horizon-d54b7587-prgkj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.601517 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t6ht4" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.610615 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-dd4f7ddbc-vtfnj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.710570 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d54b7587-prgkj" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.724830 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.773009 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-7xzlx"] Jan 28 15:21:50 crc kubenswrapper[4981]: I0128 15:21:50.853982 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kcxvv"] Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.101702 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.105094 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.118209 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.118390 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.118684 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hq4bg" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.119695 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.183150 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36dcbfff-638b-477c-8fce-45998551949b-scripts\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.183221 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36dcbfff-638b-477c-8fce-45998551949b-logs\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.183265 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36dcbfff-638b-477c-8fce-45998551949b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.183283 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bkt4\" (UniqueName: \"kubernetes.io/projected/36dcbfff-638b-477c-8fce-45998551949b-kube-api-access-5bkt4\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.183328 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.183545 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36dcbfff-638b-477c-8fce-45998551949b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.183685 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36dcbfff-638b-477c-8fce-45998551949b-config-data\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " 
pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.189848 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kcxvv" event={"ID":"98b74a60-bac3-481c-bb29-48e58ee43fed","Type":"ContainerStarted","Data":"038e9f39554e3fa15ed4de4f9adddfb0484167980744d39f0fff83476d974528"} Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.202222 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.202534 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" event={"ID":"5b8f224d-ac44-43c1-bf59-d16858e364ec","Type":"ContainerStarted","Data":"890ebb92b0bcf4b07811ff63c184ac6b24b46961531384244f6277574fadb4b8"} Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.215991 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-h9wgp"] Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.224689 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.285047 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-dns-svc\") pod \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.285149 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-config\") pod \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.285168 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-ovsdbserver-nb\") pod \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.285253 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-dns-swift-storage-0\") pod \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.285318 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-ovsdbserver-sb\") pod \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.285360 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vftnm\" (UniqueName: \"kubernetes.io/projected/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-kube-api-access-vftnm\") pod \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\" (UID: \"435cd25a-f7ab-4f22-b6ed-11c6ec21b944\") " Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.285540 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36dcbfff-638b-477c-8fce-45998551949b-logs\") 
pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.289653 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36dcbfff-638b-477c-8fce-45998551949b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.289719 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bkt4\" (UniqueName: \"kubernetes.io/projected/36dcbfff-638b-477c-8fce-45998551949b-kube-api-access-5bkt4\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.289771 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36dcbfff-638b-477c-8fce-45998551949b-logs\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.289868 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.289997 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36dcbfff-638b-477c-8fce-45998551949b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.290070 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36dcbfff-638b-477c-8fce-45998551949b-config-data\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.290106 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36dcbfff-638b-477c-8fce-45998551949b-scripts\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.290342 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36dcbfff-638b-477c-8fce-45998551949b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.290622 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") device mount path \"/mnt/openstack/pv03\"" 
pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.290718 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-kube-api-access-vftnm" (OuterVolumeSpecName: "kube-api-access-vftnm") pod "435cd25a-f7ab-4f22-b6ed-11c6ec21b944" (UID: "435cd25a-f7ab-4f22-b6ed-11c6ec21b944"). InnerVolumeSpecName "kube-api-access-vftnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.294036 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36dcbfff-638b-477c-8fce-45998551949b-scripts\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.296135 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36dcbfff-638b-477c-8fce-45998551949b-config-data\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.303614 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36dcbfff-638b-477c-8fce-45998551949b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.304667 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bkt4\" (UniqueName: \"kubernetes.io/projected/36dcbfff-638b-477c-8fce-45998551949b-kube-api-access-5bkt4\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.324445 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.393022 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vftnm\" (UniqueName: \"kubernetes.io/projected/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-kube-api-access-vftnm\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.395663 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.397313 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.402404 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.410043 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.436532 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.450684 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-h8htg"] Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.453898 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.494773 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-logs\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.494850 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.494884 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.494973 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.495102 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g26qn\" (UniqueName: \"kubernetes.io/projected/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-kube-api-access-g26qn\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.495160 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-config-data\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.495196 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-scripts\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.517336 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jd49r"] Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.528399 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-dd4f7ddbc-vtfnj"] Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 
15:21:51.554885 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "435cd25a-f7ab-4f22-b6ed-11c6ec21b944" (UID: "435cd25a-f7ab-4f22-b6ed-11c6ec21b944"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.596915 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-scripts\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.597000 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-logs\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.597061 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.597107 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.597128 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.597158 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g26qn\" (UniqueName: \"kubernetes.io/projected/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-kube-api-access-g26qn\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.597213 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-config-data\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.597272 4981 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.597354 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.597696 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-logs\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.599491 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.602797 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-scripts\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.603282 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-config-data\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.607568 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.617820 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-config" (OuterVolumeSpecName: "config") pod "435cd25a-f7ab-4f22-b6ed-11c6ec21b944" (UID: "435cd25a-f7ab-4f22-b6ed-11c6ec21b944"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.618346 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "435cd25a-f7ab-4f22-b6ed-11c6ec21b944" (UID: "435cd25a-f7ab-4f22-b6ed-11c6ec21b944"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.618774 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "435cd25a-f7ab-4f22-b6ed-11c6ec21b944" (UID: "435cd25a-f7ab-4f22-b6ed-11c6ec21b944"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.619399 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "435cd25a-f7ab-4f22-b6ed-11c6ec21b944" (UID: "435cd25a-f7ab-4f22-b6ed-11c6ec21b944"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.630542 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.644415 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g26qn\" (UniqueName: \"kubernetes.io/projected/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-kube-api-access-g26qn\") pod \"glance-default-internal-api-0\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.646035 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7q5wl"] Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.659066 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-d54b7587-prgkj"] Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.668456 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-t6ht4"] Jan 28 15:21:51 crc kubenswrapper[4981]: W0128 15:21:51.697491 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a747315_c181_4459_ae1d_3c0c5252efb7.slice/crio-c38114ba03bd3260954a61211889bd6620bfb71f24289d7ada18824cbf3b3bbe WatchSource:0}: Error finding container c38114ba03bd3260954a61211889bd6620bfb71f24289d7ada18824cbf3b3bbe: Status 404 returned error can't find the container with id c38114ba03bd3260954a61211889bd6620bfb71f24289d7ada18824cbf3b3bbe Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.698858 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.698881 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.698889 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.698897 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/435cd25a-f7ab-4f22-b6ed-11c6ec21b944-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:51 crc kubenswrapper[4981]: W0128 15:21:51.707060 4981 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b1b3aa1_aaf2_4291_b735_00fc0ca3b455.slice/crio-b81b5988e2d2315302f8fc368a772f491894403a69f9daf490ccf6223ec9b1e0 WatchSource:0}: Error finding container b81b5988e2d2315302f8fc368a772f491894403a69f9daf490ccf6223ec9b1e0: Status 404 returned error can't find the container with id b81b5988e2d2315302f8fc368a772f491894403a69f9daf490ccf6223ec9b1e0 Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.718530 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 15:21:51 crc kubenswrapper[4981]: W0128 15:21:51.795831 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82101008_6112_4a68_8776_7a2c896b5eab.slice/crio-af2a087fb3908d036ced8e80ce114f1f5d5803fc1221d45403538320d4ad1987 WatchSource:0}: Error finding container af2a087fb3908d036ced8e80ce114f1f5d5803fc1221d45403538320d4ad1987: Status 404 returned error can't find the container with id af2a087fb3908d036ced8e80ce114f1f5d5803fc1221d45403538320d4ad1987 Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.837360 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.866126 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-dd4f7ddbc-vtfnj"] Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.876081 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.888024 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7cb9bfb74f-ppc62"] Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.889570 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7cb9bfb74f-ppc62" Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.917784 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 15:21:51 crc kubenswrapper[4981]: I0128 15:21:51.936457 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cb9bfb74f-ppc62"] Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.032592 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e6751ea-45b4-493d-8a07-88d32d84625a-scripts\") pod \"horizon-7cb9bfb74f-ppc62\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") " pod="openstack/horizon-7cb9bfb74f-ppc62" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.032854 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e6751ea-45b4-493d-8a07-88d32d84625a-logs\") pod \"horizon-7cb9bfb74f-ppc62\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") " pod="openstack/horizon-7cb9bfb74f-ppc62" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.032879 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e6751ea-45b4-493d-8a07-88d32d84625a-config-data\") pod \"horizon-7cb9bfb74f-ppc62\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") " pod="openstack/horizon-7cb9bfb74f-ppc62" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.032928 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kvm2\" (UniqueName: \"kubernetes.io/projected/0e6751ea-45b4-493d-8a07-88d32d84625a-kube-api-access-7kvm2\") pod \"horizon-7cb9bfb74f-ppc62\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") " pod="openstack/horizon-7cb9bfb74f-ppc62" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.032963 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0e6751ea-45b4-493d-8a07-88d32d84625a-horizon-secret-key\") pod \"horizon-7cb9bfb74f-ppc62\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") " pod="openstack/horizon-7cb9bfb74f-ppc62" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.134996 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e6751ea-45b4-493d-8a07-88d32d84625a-scripts\") pod \"horizon-7cb9bfb74f-ppc62\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") " pod="openstack/horizon-7cb9bfb74f-ppc62" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.135342 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e6751ea-45b4-493d-8a07-88d32d84625a-logs\") pod \"horizon-7cb9bfb74f-ppc62\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") " pod="openstack/horizon-7cb9bfb74f-ppc62" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.135370 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e6751ea-45b4-493d-8a07-88d32d84625a-config-data\") pod \"horizon-7cb9bfb74f-ppc62\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") " pod="openstack/horizon-7cb9bfb74f-ppc62" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.135497 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kvm2\" (UniqueName: \"kubernetes.io/projected/0e6751ea-45b4-493d-8a07-88d32d84625a-kube-api-access-7kvm2\") pod \"horizon-7cb9bfb74f-ppc62\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") " pod="openstack/horizon-7cb9bfb74f-ppc62" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.135536 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0e6751ea-45b4-493d-8a07-88d32d84625a-horizon-secret-key\") pod \"horizon-7cb9bfb74f-ppc62\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") " pod="openstack/horizon-7cb9bfb74f-ppc62" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.135975 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e6751ea-45b4-493d-8a07-88d32d84625a-scripts\") pod \"horizon-7cb9bfb74f-ppc62\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") " pod="openstack/horizon-7cb9bfb74f-ppc62" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.136707 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e6751ea-45b4-493d-8a07-88d32d84625a-logs\") pod \"horizon-7cb9bfb74f-ppc62\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") " pod="openstack/horizon-7cb9bfb74f-ppc62" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.137172 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e6751ea-45b4-493d-8a07-88d32d84625a-config-data\") pod \"horizon-7cb9bfb74f-ppc62\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") " pod="openstack/horizon-7cb9bfb74f-ppc62" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.140936 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0e6751ea-45b4-493d-8a07-88d32d84625a-horizon-secret-key\") pod \"horizon-7cb9bfb74f-ppc62\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") " pod="openstack/horizon-7cb9bfb74f-ppc62" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.152518 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kvm2\" (UniqueName: \"kubernetes.io/projected/0e6751ea-45b4-493d-8a07-88d32d84625a-kube-api-access-7kvm2\") pod \"horizon-7cb9bfb74f-ppc62\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") " pod="openstack/horizon-7cb9bfb74f-ppc62" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.217447 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jd49r" event={"ID":"82101008-6112-4a68-8776-7a2c896b5eab","Type":"ContainerStarted","Data":"af2a087fb3908d036ced8e80ce114f1f5d5803fc1221d45403538320d4ad1987"} Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.223357 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455","Type":"ContainerStarted","Data":"b81b5988e2d2315302f8fc368a772f491894403a69f9daf490ccf6223ec9b1e0"} Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.224685 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" event={"ID":"fa17a405-daf2-4bec-ab6d-8337e759165a","Type":"ContainerStarted","Data":"d8db9a110e776618d6df8eba6566ce3c3943664a763b5c49db457f8e7ccd500f"} Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.225684 4981 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-h8htg" event={"ID":"5a747315-c181-4459-ae1d-3c0c5252efb7","Type":"ContainerStarted","Data":"c38114ba03bd3260954a61211889bd6620bfb71f24289d7ada18824cbf3b3bbe"} Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.226931 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d54b7587-prgkj" event={"ID":"3afe70ea-3dbf-495d-99ab-c2d6af72d624","Type":"ContainerStarted","Data":"54fc8ae2ae995327b7798e2e7153031d8cba03a74a4cb3b6ff29502e0f173e38"} Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.227836 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t6ht4" event={"ID":"7722b5f2-e226-483f-9ae3-d2b5a9e5a605","Type":"ContainerStarted","Data":"b86c3dd34b8aed9dc2c11ff2dc07d11e1c58dd47c9bd07ae529a17a2474d316f"} Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.229500 4981 generic.go:334] "Generic (PLEG): container finished" podID="5b8f224d-ac44-43c1-bf59-d16858e364ec" containerID="b6297a2c4271c40a12165bd1fbba8641472bdc4cc7fc4b38e0b91559505470cf" exitCode=0 Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.229679 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" event={"ID":"5b8f224d-ac44-43c1-bf59-d16858e364ec","Type":"ContainerDied","Data":"b6297a2c4271c40a12165bd1fbba8641472bdc4cc7fc4b38e0b91559505470cf"} Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.230882 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-dd4f7ddbc-vtfnj" event={"ID":"464cfecb-1d69-466f-90f1-9d9ac1166673","Type":"ContainerStarted","Data":"fda0cc754d907a974336901c63b9942c13be90e41744747c612d4f79112eb39e"} Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.238376 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kcxvv" event={"ID":"98b74a60-bac3-481c-bb29-48e58ee43fed","Type":"ContainerStarted","Data":"3971960b8f145b8779b46839ce085fbe4914bd40df499fe01532190534f6efba"} Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.239673 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.239925 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-h9wgp" event={"ID":"150ae7b2-4b64-48c7-86b3-71d7841afba3","Type":"ContainerStarted","Data":"613412e27f2b9f40ad052e3fd85249747f27e4ea02593239fabaeeb143bb57ac"} Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.279323 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-kcxvv" podStartSLOduration=3.279307428 podStartE2EDuration="3.279307428s" podCreationTimestamp="2026-01-28 15:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:52.271447712 +0000 UTC m=+1123.723605953" watchObservedRunningTime="2026-01-28 15:21:52.279307428 +0000 UTC m=+1123.731465669" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.299516 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7cb9bfb74f-ppc62" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.310568 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m"] Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.316956 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c5cc7c5ff-g5z8m"] Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.469436 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 15:21:52 crc kubenswrapper[4981]: W0128 15:21:52.492904 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31d6290e_9ed6_44f8_bb2a_7c759b99ea82.slice/crio-cd498c1cea016644a7e1abead2c20a1d21c5166c809309dfa91126762abb8711 WatchSource:0}: Error finding container cd498c1cea016644a7e1abead2c20a1d21c5166c809309dfa91126762abb8711: Status 404 returned error can't find the container with id cd498c1cea016644a7e1abead2c20a1d21c5166c809309dfa91126762abb8711 Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.508719 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.568884 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.646917 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpbvs\" (UniqueName: \"kubernetes.io/projected/5b8f224d-ac44-43c1-bf59-d16858e364ec-kube-api-access-gpbvs\") pod \"5b8f224d-ac44-43c1-bf59-d16858e364ec\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.646961 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-ovsdbserver-sb\") pod \"5b8f224d-ac44-43c1-bf59-d16858e364ec\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.647009 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-ovsdbserver-nb\") pod \"5b8f224d-ac44-43c1-bf59-d16858e364ec\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.647030 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-config\") pod \"5b8f224d-ac44-43c1-bf59-d16858e364ec\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.647146 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-dns-svc\") pod \"5b8f224d-ac44-43c1-bf59-d16858e364ec\" (UID: \"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.647181 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-dns-swift-storage-0\") pod \"5b8f224d-ac44-43c1-bf59-d16858e364ec\" (UID: 
\"5b8f224d-ac44-43c1-bf59-d16858e364ec\") " Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.653038 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b8f224d-ac44-43c1-bf59-d16858e364ec-kube-api-access-gpbvs" (OuterVolumeSpecName: "kube-api-access-gpbvs") pod "5b8f224d-ac44-43c1-bf59-d16858e364ec" (UID: "5b8f224d-ac44-43c1-bf59-d16858e364ec"). InnerVolumeSpecName "kube-api-access-gpbvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.669229 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-config" (OuterVolumeSpecName: "config") pod "5b8f224d-ac44-43c1-bf59-d16858e364ec" (UID: "5b8f224d-ac44-43c1-bf59-d16858e364ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.674941 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5b8f224d-ac44-43c1-bf59-d16858e364ec" (UID: "5b8f224d-ac44-43c1-bf59-d16858e364ec"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.680500 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5b8f224d-ac44-43c1-bf59-d16858e364ec" (UID: "5b8f224d-ac44-43c1-bf59-d16858e364ec"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.687259 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5b8f224d-ac44-43c1-bf59-d16858e364ec" (UID: "5b8f224d-ac44-43c1-bf59-d16858e364ec"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.700638 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5b8f224d-ac44-43c1-bf59-d16858e364ec" (UID: "5b8f224d-ac44-43c1-bf59-d16858e364ec"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.748922 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpbvs\" (UniqueName: \"kubernetes.io/projected/5b8f224d-ac44-43c1-bf59-d16858e364ec-kube-api-access-gpbvs\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.748956 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.748965 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.748975 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.748986 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.748996 4981 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5b8f224d-ac44-43c1-bf59-d16858e364ec-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:21:52 crc kubenswrapper[4981]: I0128 15:21:52.854152 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7cb9bfb74f-ppc62"] Jan 28 15:21:52 crc kubenswrapper[4981]: W0128 15:21:52.905656 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e6751ea_45b4_493d_8a07_88d32d84625a.slice/crio-dbc5e910ca9c040e4e4e4abca239022c4eb40b28ce743707e5b65e6504930dd2 WatchSource:0}: Error finding container dbc5e910ca9c040e4e4e4abca239022c4eb40b28ce743707e5b65e6504930dd2: Status 404 returned error can't find the container with id dbc5e910ca9c040e4e4e4abca239022c4eb40b28ce743707e5b65e6504930dd2 Jan 28 15:21:53 crc kubenswrapper[4981]: I0128 15:21:53.249061 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36dcbfff-638b-477c-8fce-45998551949b","Type":"ContainerStarted","Data":"92b9f31c0c39ca2471b57f6f57ec56f99f8afe94e6faa2cd2f1b20c9ad606197"} Jan 28 15:21:53 crc kubenswrapper[4981]: I0128 15:21:53.251488 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cb9bfb74f-ppc62" event={"ID":"0e6751ea-45b4-493d-8a07-88d32d84625a","Type":"ContainerStarted","Data":"dbc5e910ca9c040e4e4e4abca239022c4eb40b28ce743707e5b65e6504930dd2"} Jan 28 15:21:53 crc kubenswrapper[4981]: I0128 15:21:53.253066 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"31d6290e-9ed6-44f8-bb2a-7c759b99ea82","Type":"ContainerStarted","Data":"cd498c1cea016644a7e1abead2c20a1d21c5166c809309dfa91126762abb8711"} Jan 28 15:21:53 crc kubenswrapper[4981]: I0128 15:21:53.254647 4981 generic.go:334] "Generic (PLEG): container finished" podID="fa17a405-daf2-4bec-ab6d-8337e759165a" containerID="590ff380f6f9505e3ce74379ea190125f952fef6562a97e1c051b441e3ce9cc6" 
exitCode=0 Jan 28 15:21:53 crc kubenswrapper[4981]: I0128 15:21:53.254720 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" event={"ID":"fa17a405-daf2-4bec-ab6d-8337e759165a","Type":"ContainerDied","Data":"590ff380f6f9505e3ce74379ea190125f952fef6562a97e1c051b441e3ce9cc6"} Jan 28 15:21:53 crc kubenswrapper[4981]: I0128 15:21:53.259139 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-h9wgp" event={"ID":"150ae7b2-4b64-48c7-86b3-71d7841afba3","Type":"ContainerStarted","Data":"4daa23ca251918f431c71796a106f059b31f9684e7815db0ea87cfcbd133962d"} Jan 28 15:21:53 crc kubenswrapper[4981]: I0128 15:21:53.264442 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" Jan 28 15:21:53 crc kubenswrapper[4981]: I0128 15:21:53.264594 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" event={"ID":"5b8f224d-ac44-43c1-bf59-d16858e364ec","Type":"ContainerDied","Data":"890ebb92b0bcf4b07811ff63c184ac6b24b46961531384244f6277574fadb4b8"} Jan 28 15:21:53 crc kubenswrapper[4981]: I0128 15:21:53.264639 4981 scope.go:117] "RemoveContainer" containerID="b6297a2c4271c40a12165bd1fbba8641472bdc4cc7fc4b38e0b91559505470cf" Jan 28 15:21:53 crc kubenswrapper[4981]: I0128 15:21:53.338988 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="435cd25a-f7ab-4f22-b6ed-11c6ec21b944" path="/var/lib/kubelet/pods/435cd25a-f7ab-4f22-b6ed-11c6ec21b944/volumes" Jan 28 15:21:53 crc kubenswrapper[4981]: I0128 15:21:53.346633 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-h9wgp" podStartSLOduration=4.346614572 podStartE2EDuration="4.346614572s" podCreationTimestamp="2026-01-28 15:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:53.295358786 +0000 UTC m=+1124.747517027" watchObservedRunningTime="2026-01-28 15:21:53.346614572 +0000 UTC m=+1124.798772803" Jan 28 15:21:54 crc kubenswrapper[4981]: I0128 15:21:54.318257 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36dcbfff-638b-477c-8fce-45998551949b","Type":"ContainerStarted","Data":"739ec6b9d11796379430bfea0145e649df712293a6aa8b3459fa9836acef4d6a"} Jan 28 15:21:54 crc kubenswrapper[4981]: I0128 15:21:54.318616 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36dcbfff-638b-477c-8fce-45998551949b","Type":"ContainerStarted","Data":"daaca4b71d2e1f7d97deac74d50110dd8a0f9e753a26ecc6c16b2727a39112e2"} Jan 28 15:21:54 crc kubenswrapper[4981]: I0128 15:21:54.318489 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="36dcbfff-638b-477c-8fce-45998551949b" containerName="glance-log" containerID="cri-o://739ec6b9d11796379430bfea0145e649df712293a6aa8b3459fa9836acef4d6a" gracePeriod=30 Jan 28 15:21:54 crc kubenswrapper[4981]: I0128 15:21:54.318663 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="36dcbfff-638b-477c-8fce-45998551949b" containerName="glance-httpd" containerID="cri-o://daaca4b71d2e1f7d97deac74d50110dd8a0f9e753a26ecc6c16b2727a39112e2" gracePeriod=30 Jan 28 15:21:54 crc kubenswrapper[4981]: I0128 15:21:54.321803 4981 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"31d6290e-9ed6-44f8-bb2a-7c759b99ea82","Type":"ContainerStarted","Data":"46ae4addff2217ea4b9f2550bb6a0c9d004f0bb0598d685dfec010f6956885f0"} Jan 28 15:21:54 crc kubenswrapper[4981]: I0128 15:21:54.321835 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"31d6290e-9ed6-44f8-bb2a-7c759b99ea82","Type":"ContainerStarted","Data":"1400fdd1740fec5667aa1a20024c06de82ce3ec38e36b5d30951c438a25c329c"} Jan 28 15:21:54 crc kubenswrapper[4981]: I0128 15:21:54.321920 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="31d6290e-9ed6-44f8-bb2a-7c759b99ea82" containerName="glance-log" containerID="cri-o://46ae4addff2217ea4b9f2550bb6a0c9d004f0bb0598d685dfec010f6956885f0" gracePeriod=30 Jan 28 15:21:54 crc kubenswrapper[4981]: I0128 15:21:54.321983 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="31d6290e-9ed6-44f8-bb2a-7c759b99ea82" containerName="glance-httpd" containerID="cri-o://1400fdd1740fec5667aa1a20024c06de82ce3ec38e36b5d30951c438a25c329c" gracePeriod=30 Jan 28 15:21:54 crc kubenswrapper[4981]: I0128 15:21:54.333081 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" event={"ID":"fa17a405-daf2-4bec-ab6d-8337e759165a","Type":"ContainerStarted","Data":"db3e802e110f6f8ccadb679ee82b0cb801c753bff29cb8232f82cd84175ed503"} Jan 28 15:21:54 crc kubenswrapper[4981]: I0128 15:21:54.333496 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:21:54 crc kubenswrapper[4981]: I0128 15:21:54.362652 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.362635129 podStartE2EDuration="4.362635129s" podCreationTimestamp="2026-01-28 15:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:54.341909375 +0000 UTC m=+1125.794067626" watchObservedRunningTime="2026-01-28 15:21:54.362635129 +0000 UTC m=+1125.814793370" Jan 28 15:21:54 crc kubenswrapper[4981]: I0128 15:21:54.368987 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.368968445 podStartE2EDuration="4.368968445s" podCreationTimestamp="2026-01-28 15:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:54.363771299 +0000 UTC m=+1125.815929530" watchObservedRunningTime="2026-01-28 15:21:54.368968445 +0000 UTC m=+1125.821126686" Jan 28 15:21:54 crc kubenswrapper[4981]: I0128 15:21:54.383419 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" podStartSLOduration=4.383404494 podStartE2EDuration="4.383404494s" podCreationTimestamp="2026-01-28 15:21:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:21:54.380919309 +0000 UTC m=+1125.833077550" watchObservedRunningTime="2026-01-28 15:21:54.383404494 +0000 UTC m=+1125.835562735" Jan 28 15:21:55 crc kubenswrapper[4981]: I0128 15:21:55.344887 4981 
generic.go:334] "Generic (PLEG): container finished" podID="36dcbfff-638b-477c-8fce-45998551949b" containerID="daaca4b71d2e1f7d97deac74d50110dd8a0f9e753a26ecc6c16b2727a39112e2" exitCode=143 Jan 28 15:21:55 crc kubenswrapper[4981]: I0128 15:21:55.344931 4981 generic.go:334] "Generic (PLEG): container finished" podID="36dcbfff-638b-477c-8fce-45998551949b" containerID="739ec6b9d11796379430bfea0145e649df712293a6aa8b3459fa9836acef4d6a" exitCode=143 Jan 28 15:21:55 crc kubenswrapper[4981]: I0128 15:21:55.344942 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36dcbfff-638b-477c-8fce-45998551949b","Type":"ContainerDied","Data":"daaca4b71d2e1f7d97deac74d50110dd8a0f9e753a26ecc6c16b2727a39112e2"} Jan 28 15:21:55 crc kubenswrapper[4981]: I0128 15:21:55.344992 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36dcbfff-638b-477c-8fce-45998551949b","Type":"ContainerDied","Data":"739ec6b9d11796379430bfea0145e649df712293a6aa8b3459fa9836acef4d6a"} Jan 28 15:21:55 crc kubenswrapper[4981]: I0128 15:21:55.351008 4981 generic.go:334] "Generic (PLEG): container finished" podID="31d6290e-9ed6-44f8-bb2a-7c759b99ea82" containerID="1400fdd1740fec5667aa1a20024c06de82ce3ec38e36b5d30951c438a25c329c" exitCode=143 Jan 28 15:21:55 crc kubenswrapper[4981]: I0128 15:21:55.351040 4981 generic.go:334] "Generic (PLEG): container finished" podID="31d6290e-9ed6-44f8-bb2a-7c759b99ea82" containerID="46ae4addff2217ea4b9f2550bb6a0c9d004f0bb0598d685dfec010f6956885f0" exitCode=143 Jan 28 15:21:55 crc kubenswrapper[4981]: I0128 15:21:55.351067 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"31d6290e-9ed6-44f8-bb2a-7c759b99ea82","Type":"ContainerDied","Data":"1400fdd1740fec5667aa1a20024c06de82ce3ec38e36b5d30951c438a25c329c"} Jan 28 15:21:55 crc kubenswrapper[4981]: I0128 15:21:55.351112 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"31d6290e-9ed6-44f8-bb2a-7c759b99ea82","Type":"ContainerDied","Data":"46ae4addff2217ea4b9f2550bb6a0c9d004f0bb0598d685dfec010f6956885f0"} Jan 28 15:22:00 crc kubenswrapper[4981]: I0128 15:22:00.726795 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:22:00 crc kubenswrapper[4981]: I0128 15:22:00.801554 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-k8fts"] Jan 28 15:22:00 crc kubenswrapper[4981]: I0128 15:22:00.803412 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" podUID="200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" containerName="dnsmasq-dns" containerID="cri-o://ab2a0c3b00b1e84b7c8d81cf457df0ea9d9a59e155067872470da7822793dd35" gracePeriod=10 Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.212025 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d54b7587-prgkj"] Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.258819 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-54f45b9c5b-drxcg"] Jan 28 15:22:01 crc kubenswrapper[4981]: E0128 15:22:01.259283 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b8f224d-ac44-43c1-bf59-d16858e364ec" containerName="init" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.259299 4981 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5b8f224d-ac44-43c1-bf59-d16858e364ec" containerName="init" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.259517 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b8f224d-ac44-43c1-bf59-d16858e364ec" containerName="init" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.260658 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.267074 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.275982 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54f45b9c5b-drxcg"] Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.305915 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7cb9bfb74f-ppc62"] Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.347822 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465c6840-8900-4520-b80a-aab52f45c173-combined-ca-bundle\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.347892 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8vl8\" (UniqueName: \"kubernetes.io/projected/465c6840-8900-4520-b80a-aab52f45c173-kube-api-access-x8vl8\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.347960 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/465c6840-8900-4520-b80a-aab52f45c173-config-data\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.347988 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465c6840-8900-4520-b80a-aab52f45c173-logs\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.348017 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/465c6840-8900-4520-b80a-aab52f45c173-horizon-tls-certs\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.348064 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/465c6840-8900-4520-b80a-aab52f45c173-horizon-secret-key\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.348092 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/465c6840-8900-4520-b80a-aab52f45c173-scripts\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.350580 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6d9d89fcfb-mwsgh"] Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.352200 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.355941 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d9d89fcfb-mwsgh"] Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.428525 4981 generic.go:334] "Generic (PLEG): container finished" podID="200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" containerID="ab2a0c3b00b1e84b7c8d81cf457df0ea9d9a59e155067872470da7822793dd35" exitCode=0 Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.428562 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" event={"ID":"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0","Type":"ContainerDied","Data":"ab2a0c3b00b1e84b7c8d81cf457df0ea9d9a59e155067872470da7822793dd35"} Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.449317 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/465c6840-8900-4520-b80a-aab52f45c173-horizon-secret-key\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.449380 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v64ht\" (UniqueName: \"kubernetes.io/projected/d02db79a-7f4f-453c-8e92-2e8291f442f1-kube-api-access-v64ht\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.449404 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/465c6840-8900-4520-b80a-aab52f45c173-scripts\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.449430 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465c6840-8900-4520-b80a-aab52f45c173-combined-ca-bundle\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.449460 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8vl8\" (UniqueName: \"kubernetes.io/projected/465c6840-8900-4520-b80a-aab52f45c173-kube-api-access-x8vl8\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.449494 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d02db79a-7f4f-453c-8e92-2e8291f442f1-scripts\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") 
" pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.449520 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d02db79a-7f4f-453c-8e92-2e8291f442f1-horizon-secret-key\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.449554 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d02db79a-7f4f-453c-8e92-2e8291f442f1-horizon-tls-certs\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.449609 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d02db79a-7f4f-453c-8e92-2e8291f442f1-config-data\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.449652 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d02db79a-7f4f-453c-8e92-2e8291f442f1-logs\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.449674 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/465c6840-8900-4520-b80a-aab52f45c173-config-data\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.449695 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02db79a-7f4f-453c-8e92-2e8291f442f1-combined-ca-bundle\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.449713 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465c6840-8900-4520-b80a-aab52f45c173-logs\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.449746 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/465c6840-8900-4520-b80a-aab52f45c173-horizon-tls-certs\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.450277 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/465c6840-8900-4520-b80a-aab52f45c173-scripts\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: 
I0128 15:22:01.450413 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465c6840-8900-4520-b80a-aab52f45c173-logs\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.450906 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/465c6840-8900-4520-b80a-aab52f45c173-config-data\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.456788 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/465c6840-8900-4520-b80a-aab52f45c173-horizon-secret-key\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.456801 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465c6840-8900-4520-b80a-aab52f45c173-combined-ca-bundle\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.457178 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/465c6840-8900-4520-b80a-aab52f45c173-horizon-tls-certs\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.475363 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8vl8\" (UniqueName: \"kubernetes.io/projected/465c6840-8900-4520-b80a-aab52f45c173-kube-api-access-x8vl8\") pod \"horizon-54f45b9c5b-drxcg\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") " pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.550471 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d02db79a-7f4f-453c-8e92-2e8291f442f1-scripts\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.550518 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d02db79a-7f4f-453c-8e92-2e8291f442f1-horizon-secret-key\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.550547 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d02db79a-7f4f-453c-8e92-2e8291f442f1-horizon-tls-certs\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.550563 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/d02db79a-7f4f-453c-8e92-2e8291f442f1-config-data\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.550585 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d02db79a-7f4f-453c-8e92-2e8291f442f1-logs\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.550607 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02db79a-7f4f-453c-8e92-2e8291f442f1-combined-ca-bundle\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.550676 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v64ht\" (UniqueName: \"kubernetes.io/projected/d02db79a-7f4f-453c-8e92-2e8291f442f1-kube-api-access-v64ht\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.551736 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d02db79a-7f4f-453c-8e92-2e8291f442f1-logs\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.551894 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d02db79a-7f4f-453c-8e92-2e8291f442f1-scripts\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.552207 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d02db79a-7f4f-453c-8e92-2e8291f442f1-config-data\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.554988 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/d02db79a-7f4f-453c-8e92-2e8291f442f1-horizon-tls-certs\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.555315 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d02db79a-7f4f-453c-8e92-2e8291f442f1-combined-ca-bundle\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.559777 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d02db79a-7f4f-453c-8e92-2e8291f442f1-horizon-secret-key\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" 
Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.573913 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v64ht\" (UniqueName: \"kubernetes.io/projected/d02db79a-7f4f-453c-8e92-2e8291f442f1-kube-api-access-v64ht\") pod \"horizon-6d9d89fcfb-mwsgh\" (UID: \"d02db79a-7f4f-453c-8e92-2e8291f442f1\") " pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.601905 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:01 crc kubenswrapper[4981]: I0128 15:22:01.680534 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:02 crc kubenswrapper[4981]: I0128 15:22:02.557077 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" podUID="200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Jan 28 15:22:03 crc kubenswrapper[4981]: I0128 15:22:03.446413 4981 generic.go:334] "Generic (PLEG): container finished" podID="98b74a60-bac3-481c-bb29-48e58ee43fed" containerID="3971960b8f145b8779b46839ce085fbe4914bd40df499fe01532190534f6efba" exitCode=0 Jan 28 15:22:03 crc kubenswrapper[4981]: I0128 15:22:03.446462 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kcxvv" event={"ID":"98b74a60-bac3-481c-bb29-48e58ee43fed","Type":"ContainerDied","Data":"3971960b8f145b8779b46839ce085fbe4914bd40df499fe01532190534f6efba"} Jan 28 15:22:07 crc kubenswrapper[4981]: I0128 15:22:07.557857 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" podUID="200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused" Jan 28 15:22:11 crc kubenswrapper[4981]: E0128 15:22:11.025493 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 28 15:22:11 crc kubenswrapper[4981]: E0128 15:22:11.029304 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n75h666h549hbbh656hd5hc5h7ch7bh86h7fh548h7dh87hb5h6dh7dhc5hdbh5bdh75h5fdh55ch87h644h8bh5cfh654hbch76h576h95q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qbxgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-d54b7587-prgkj_openstack(3afe70ea-3dbf-495d-99ab-c2d6af72d624): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:22:11 crc kubenswrapper[4981]: E0128 15:22:11.031885 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-d54b7587-prgkj" podUID="3afe70ea-3dbf-495d-99ab-c2d6af72d624" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.150060 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.159352 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: E0128 15:22:11.256529 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 28 15:22:11 crc kubenswrapper[4981]: E0128 15:22:11.256756 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65fh59dh577h65hb7h54bh687h56dh5bfh5bch655h579h65bh76h58fh565h548hfbh54ch55fh5fdhb7h68dh676h8fh99h658h66bh574hd9h69h585q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kvm2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-7cb9bfb74f-ppc62_openstack(0e6751ea-45b4-493d-8a07-88d32d84625a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:22:11 crc kubenswrapper[4981]: E0128 15:22:11.258849 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-7cb9bfb74f-ppc62" podUID="0e6751ea-45b4-493d-8a07-88d32d84625a" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.278548 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-scripts\") pod \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.278600 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36dcbfff-638b-477c-8fce-45998551949b-config-data\") pod \"36dcbfff-638b-477c-8fce-45998551949b\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.278631 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36dcbfff-638b-477c-8fce-45998551949b-logs\") pod \"36dcbfff-638b-477c-8fce-45998551949b\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.278683 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36dcbfff-638b-477c-8fce-45998551949b-combined-ca-bundle\") pod \"36dcbfff-638b-477c-8fce-45998551949b\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.278707 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-logs\") pod \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.278780 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-combined-ca-bundle\") pod \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.278814 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g26qn\" (UniqueName: \"kubernetes.io/projected/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-kube-api-access-g26qn\") pod \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.278866 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"36dcbfff-638b-477c-8fce-45998551949b\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.278915 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36dcbfff-638b-477c-8fce-45998551949b-scripts\") pod \"36dcbfff-638b-477c-8fce-45998551949b\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.278949 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bkt4\" (UniqueName: \"kubernetes.io/projected/36dcbfff-638b-477c-8fce-45998551949b-kube-api-access-5bkt4\") pod \"36dcbfff-638b-477c-8fce-45998551949b\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.279004 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-config-data\") pod \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.279069 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.279091 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-httpd-run\") pod \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\" (UID: \"31d6290e-9ed6-44f8-bb2a-7c759b99ea82\") " Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.279134 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36dcbfff-638b-477c-8fce-45998551949b-httpd-run\") pod \"36dcbfff-638b-477c-8fce-45998551949b\" (UID: \"36dcbfff-638b-477c-8fce-45998551949b\") " Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.279498 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36dcbfff-638b-477c-8fce-45998551949b-logs" (OuterVolumeSpecName: "logs") pod "36dcbfff-638b-477c-8fce-45998551949b" (UID: "36dcbfff-638b-477c-8fce-45998551949b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.279943 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36dcbfff-638b-477c-8fce-45998551949b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "36dcbfff-638b-477c-8fce-45998551949b" (UID: "36dcbfff-638b-477c-8fce-45998551949b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.279995 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-logs" (OuterVolumeSpecName: "logs") pod "31d6290e-9ed6-44f8-bb2a-7c759b99ea82" (UID: "31d6290e-9ed6-44f8-bb2a-7c759b99ea82"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.281126 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "31d6290e-9ed6-44f8-bb2a-7c759b99ea82" (UID: "31d6290e-9ed6-44f8-bb2a-7c759b99ea82"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.286554 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "31d6290e-9ed6-44f8-bb2a-7c759b99ea82" (UID: "31d6290e-9ed6-44f8-bb2a-7c759b99ea82"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.286558 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-scripts" (OuterVolumeSpecName: "scripts") pod "31d6290e-9ed6-44f8-bb2a-7c759b99ea82" (UID: "31d6290e-9ed6-44f8-bb2a-7c759b99ea82"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.286782 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-kube-api-access-g26qn" (OuterVolumeSpecName: "kube-api-access-g26qn") pod "31d6290e-9ed6-44f8-bb2a-7c759b99ea82" (UID: "31d6290e-9ed6-44f8-bb2a-7c759b99ea82"). InnerVolumeSpecName "kube-api-access-g26qn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.292384 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36dcbfff-638b-477c-8fce-45998551949b-scripts" (OuterVolumeSpecName: "scripts") pod "36dcbfff-638b-477c-8fce-45998551949b" (UID: "36dcbfff-638b-477c-8fce-45998551949b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.293406 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36dcbfff-638b-477c-8fce-45998551949b-kube-api-access-5bkt4" (OuterVolumeSpecName: "kube-api-access-5bkt4") pod "36dcbfff-638b-477c-8fce-45998551949b" (UID: "36dcbfff-638b-477c-8fce-45998551949b"). InnerVolumeSpecName "kube-api-access-5bkt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.299089 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "36dcbfff-638b-477c-8fce-45998551949b" (UID: "36dcbfff-638b-477c-8fce-45998551949b"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.310466 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36dcbfff-638b-477c-8fce-45998551949b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36dcbfff-638b-477c-8fce-45998551949b" (UID: "36dcbfff-638b-477c-8fce-45998551949b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.315531 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31d6290e-9ed6-44f8-bb2a-7c759b99ea82" (UID: "31d6290e-9ed6-44f8-bb2a-7c759b99ea82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.329695 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-config-data" (OuterVolumeSpecName: "config-data") pod "31d6290e-9ed6-44f8-bb2a-7c759b99ea82" (UID: "31d6290e-9ed6-44f8-bb2a-7c759b99ea82"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.363305 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36dcbfff-638b-477c-8fce-45998551949b-config-data" (OuterVolumeSpecName: "config-data") pod "36dcbfff-638b-477c-8fce-45998551949b" (UID: "36dcbfff-638b-477c-8fce-45998551949b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.381434 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36dcbfff-638b-477c-8fce-45998551949b-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.381614 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bkt4\" (UniqueName: \"kubernetes.io/projected/36dcbfff-638b-477c-8fce-45998551949b-kube-api-access-5bkt4\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.381763 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.382177 4981 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.382324 4981 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.382448 4981 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/36dcbfff-638b-477c-8fce-45998551949b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.382559 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.382673 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36dcbfff-638b-477c-8fce-45998551949b-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.382781 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36dcbfff-638b-477c-8fce-45998551949b-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.383060 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36dcbfff-638b-477c-8fce-45998551949b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.383240 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.383381 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.383530 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g26qn\" (UniqueName: \"kubernetes.io/projected/31d6290e-9ed6-44f8-bb2a-7c759b99ea82-kube-api-access-g26qn\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.383679 4981 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.405790 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.410442 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.486033 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.486084 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.529576 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"36dcbfff-638b-477c-8fce-45998551949b","Type":"ContainerDied","Data":"92b9f31c0c39ca2471b57f6f57ec56f99f8afe94e6faa2cd2f1b20c9ad606197"} Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.529924 4981 scope.go:117] "RemoveContainer" containerID="daaca4b71d2e1f7d97deac74d50110dd8a0f9e753a26ecc6c16b2727a39112e2" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.530273 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.538037 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.538698 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"31d6290e-9ed6-44f8-bb2a-7c759b99ea82","Type":"ContainerDied","Data":"cd498c1cea016644a7e1abead2c20a1d21c5166c809309dfa91126762abb8711"} Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.629774 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.638955 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.659870 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.700237 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.708653 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 15:22:11 crc kubenswrapper[4981]: E0128 15:22:11.709022 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36dcbfff-638b-477c-8fce-45998551949b" containerName="glance-log" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.709038 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="36dcbfff-638b-477c-8fce-45998551949b" containerName="glance-log" Jan 28 15:22:11 crc kubenswrapper[4981]: E0128 15:22:11.709057 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31d6290e-9ed6-44f8-bb2a-7c759b99ea82" containerName="glance-log" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.709063 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="31d6290e-9ed6-44f8-bb2a-7c759b99ea82" containerName="glance-log" Jan 28 15:22:11 crc kubenswrapper[4981]: E0128 15:22:11.709078 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31d6290e-9ed6-44f8-bb2a-7c759b99ea82" containerName="glance-httpd" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.709084 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="31d6290e-9ed6-44f8-bb2a-7c759b99ea82" containerName="glance-httpd" Jan 28 15:22:11 crc kubenswrapper[4981]: E0128 15:22:11.709107 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36dcbfff-638b-477c-8fce-45998551949b" containerName="glance-httpd" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.709113 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="36dcbfff-638b-477c-8fce-45998551949b" containerName="glance-httpd" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.709286 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="36dcbfff-638b-477c-8fce-45998551949b" containerName="glance-log" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.709297 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="31d6290e-9ed6-44f8-bb2a-7c759b99ea82" containerName="glance-log" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.709310 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="31d6290e-9ed6-44f8-bb2a-7c759b99ea82" containerName="glance-httpd" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.709318 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="36dcbfff-638b-477c-8fce-45998551949b" 
containerName="glance-httpd" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.710139 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.713212 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-hq4bg" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.713363 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.713474 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.713568 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.721576 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.734251 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.735947 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.739420 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.739597 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.761836 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.894209 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.894556 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.894585 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-scripts\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.894605 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0" Jan 28 15:22:11 crc 
kubenswrapper[4981]: I0128 15:22:11.894644 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.894670 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.894696 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.894727 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.894742 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.894768 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.894784 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba373618-9613-48d3-9023-ce519f54fb7f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.894804 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.894822 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba373618-9613-48d3-9023-ce519f54fb7f-logs\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " 
pod="openstack/glance-default-external-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.894857 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b5fd\" (UniqueName: \"kubernetes.io/projected/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-kube-api-access-7b5fd\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.894874 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97p98\" (UniqueName: \"kubernetes.io/projected/ba373618-9613-48d3-9023-ce519f54fb7f-kube-api-access-97p98\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0" Jan 28 15:22:11 crc kubenswrapper[4981]: I0128 15:22:11.894896 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-config-data\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0" Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.995966 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b5fd\" (UniqueName: \"kubernetes.io/projected/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-kube-api-access-7b5fd\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.996003 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97p98\" (UniqueName: \"kubernetes.io/projected/ba373618-9613-48d3-9023-ce519f54fb7f-kube-api-access-97p98\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0" Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.996030 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-config-data\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0" Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.996075 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.996095 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.996117 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0" Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.996133 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0" Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.996165 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.996198 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.996217 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0" Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.996242 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0" Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.996261 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.996286 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0" Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.996302 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba373618-9613-48d3-9023-ce519f54fb7f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0" Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.996322 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0" Jan 28 
15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.996339 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba373618-9613-48d3-9023-ce519f54fb7f-logs\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.996783 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba373618-9613-48d3-9023-ce519f54fb7f-logs\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.998218 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.998715 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba373618-9613-48d3-9023-ce519f54fb7f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.998909 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.999476 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-logs\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:11.999822 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:12.000583 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-config-data\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:12.001401 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:12.001400 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-scripts\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:12.001593 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:12.014203 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:12.014219 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:12.015631 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:12.015994 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:12.018334 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97p98\" (UniqueName: \"kubernetes.io/projected/ba373618-9613-48d3-9023-ce519f54fb7f-kube-api-access-97p98\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:12.019146 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b5fd\" (UniqueName: \"kubernetes.io/projected/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-kube-api-access-7b5fd\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:12.044604 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:12.057585 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:12.078128 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:12.328482 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:12.558608 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" podUID="200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:12.558733 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:13.332599 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d6290e-9ed6-44f8-bb2a-7c759b99ea82" path="/var/lib/kubelet/pods/31d6290e-9ed6-44f8-bb2a-7c759b99ea82/volumes"
Jan 28 15:22:13 crc kubenswrapper[4981]: I0128 15:22:13.334037 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36dcbfff-638b-477c-8fce-45998551949b" path="/var/lib/kubelet/pods/36dcbfff-638b-477c-8fce-45998551949b/volumes"
Jan 28 15:22:13 crc kubenswrapper[4981]: E0128 15:22:13.541652 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified"
Jan 28 15:22:13 crc kubenswrapper[4981]: E0128 15:22:13.542132 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nbfh595h5c5h596h5ch554h659h66ch9bh569h6bh587h5b5h5cbh5bbhdch9ch5bh5d6h5bh5d7h699h99h68bh545hbbhfdh5dh5d4h595hd7h58bq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sl2c9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-dd4f7ddbc-vtfnj_openstack(464cfecb-1d69-466f-90f1-9d9ac1166673): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 28 15:22:13 crc kubenswrapper[4981]: E0128 15:22:13.545541 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-dd4f7ddbc-vtfnj" podUID="464cfecb-1d69-466f-90f1-9d9ac1166673"
Jan 28 15:22:17 crc kubenswrapper[4981]: I0128 15:22:17.557254 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" podUID="200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused"
Jan 28 15:22:22 crc kubenswrapper[4981]: I0128 15:22:22.557280 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" podUID="200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: connect: connection refused"
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.140358 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kcxvv"
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.146681 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d54b7587-prgkj"
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.152617 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cb9bfb74f-ppc62"
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.234732 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3afe70ea-3dbf-495d-99ab-c2d6af72d624-scripts\") pod \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") "
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.234978 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0e6751ea-45b4-493d-8a07-88d32d84625a-horizon-secret-key\") pod \"0e6751ea-45b4-493d-8a07-88d32d84625a\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") "
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.235106 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afe70ea-3dbf-495d-99ab-c2d6af72d624-logs\") pod \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") "
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.235227 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbxgj\" (UniqueName: \"kubernetes.io/projected/3afe70ea-3dbf-495d-99ab-c2d6af72d624-kube-api-access-qbxgj\") pod \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") "
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.235346 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-scripts\") pod \"98b74a60-bac3-481c-bb29-48e58ee43fed\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") "
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.235457 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3afe70ea-3dbf-495d-99ab-c2d6af72d624-config-data\") pod \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") "
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.235252 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3afe70ea-3dbf-495d-99ab-c2d6af72d624-scripts" (OuterVolumeSpecName: "scripts") pod "3afe70ea-3dbf-495d-99ab-c2d6af72d624" (UID: "3afe70ea-3dbf-495d-99ab-c2d6af72d624"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.235516 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3afe70ea-3dbf-495d-99ab-c2d6af72d624-logs" (OuterVolumeSpecName: "logs") pod "3afe70ea-3dbf-495d-99ab-c2d6af72d624" (UID: "3afe70ea-3dbf-495d-99ab-c2d6af72d624"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.235666 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-fernet-keys\") pod \"98b74a60-bac3-481c-bb29-48e58ee43fed\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") "
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.235797 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-config-data\") pod \"98b74a60-bac3-481c-bb29-48e58ee43fed\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") "
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.235885 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj97z\" (UniqueName: \"kubernetes.io/projected/98b74a60-bac3-481c-bb29-48e58ee43fed-kube-api-access-pj97z\") pod \"98b74a60-bac3-481c-bb29-48e58ee43fed\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") "
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.235987 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-combined-ca-bundle\") pod \"98b74a60-bac3-481c-bb29-48e58ee43fed\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") "
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.236100 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e6751ea-45b4-493d-8a07-88d32d84625a-logs\") pod \"0e6751ea-45b4-493d-8a07-88d32d84625a\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") "
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.236208 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e6751ea-45b4-493d-8a07-88d32d84625a-config-data\") pod \"0e6751ea-45b4-493d-8a07-88d32d84625a\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") "
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.236323 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3afe70ea-3dbf-495d-99ab-c2d6af72d624-horizon-secret-key\") pod \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\" (UID: \"3afe70ea-3dbf-495d-99ab-c2d6af72d624\") "
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.236462 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-credential-keys\") pod \"98b74a60-bac3-481c-bb29-48e58ee43fed\" (UID: \"98b74a60-bac3-481c-bb29-48e58ee43fed\") "
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.236867 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e6751ea-45b4-493d-8a07-88d32d84625a-scripts\") pod \"0e6751ea-45b4-493d-8a07-88d32d84625a\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") "
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.237006 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kvm2\" (UniqueName: \"kubernetes.io/projected/0e6751ea-45b4-493d-8a07-88d32d84625a-kube-api-access-7kvm2\") pod \"0e6751ea-45b4-493d-8a07-88d32d84625a\" (UID: \"0e6751ea-45b4-493d-8a07-88d32d84625a\") "
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.237852 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3afe70ea-3dbf-495d-99ab-c2d6af72d624-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.237999 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3afe70ea-3dbf-495d-99ab-c2d6af72d624-logs\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.240447 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e6751ea-45b4-493d-8a07-88d32d84625a-logs" (OuterVolumeSpecName: "logs") pod "0e6751ea-45b4-493d-8a07-88d32d84625a" (UID: "0e6751ea-45b4-493d-8a07-88d32d84625a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.240998 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e6751ea-45b4-493d-8a07-88d32d84625a-scripts" (OuterVolumeSpecName: "scripts") pod "0e6751ea-45b4-493d-8a07-88d32d84625a" (UID: "0e6751ea-45b4-493d-8a07-88d32d84625a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.242141 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e6751ea-45b4-493d-8a07-88d32d84625a-config-data" (OuterVolumeSpecName: "config-data") pod "0e6751ea-45b4-493d-8a07-88d32d84625a" (UID: "0e6751ea-45b4-493d-8a07-88d32d84625a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.242922 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3afe70ea-3dbf-495d-99ab-c2d6af72d624-config-data" (OuterVolumeSpecName: "config-data") pod "3afe70ea-3dbf-495d-99ab-c2d6af72d624" (UID: "3afe70ea-3dbf-495d-99ab-c2d6af72d624"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.245972 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3afe70ea-3dbf-495d-99ab-c2d6af72d624-kube-api-access-qbxgj" (OuterVolumeSpecName: "kube-api-access-qbxgj") pod "3afe70ea-3dbf-495d-99ab-c2d6af72d624" (UID: "3afe70ea-3dbf-495d-99ab-c2d6af72d624"). InnerVolumeSpecName "kube-api-access-qbxgj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.248183 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e6751ea-45b4-493d-8a07-88d32d84625a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0e6751ea-45b4-493d-8a07-88d32d84625a" (UID: "0e6751ea-45b4-493d-8a07-88d32d84625a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.252414 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "98b74a60-bac3-481c-bb29-48e58ee43fed" (UID: "98b74a60-bac3-481c-bb29-48e58ee43fed"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.252526 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e6751ea-45b4-493d-8a07-88d32d84625a-kube-api-access-7kvm2" (OuterVolumeSpecName: "kube-api-access-7kvm2") pod "0e6751ea-45b4-493d-8a07-88d32d84625a" (UID: "0e6751ea-45b4-493d-8a07-88d32d84625a"). InnerVolumeSpecName "kube-api-access-7kvm2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.252533 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3afe70ea-3dbf-495d-99ab-c2d6af72d624-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3afe70ea-3dbf-495d-99ab-c2d6af72d624" (UID: "3afe70ea-3dbf-495d-99ab-c2d6af72d624"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.253136 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-scripts" (OuterVolumeSpecName: "scripts") pod "98b74a60-bac3-481c-bb29-48e58ee43fed" (UID: "98b74a60-bac3-481c-bb29-48e58ee43fed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.267581 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98b74a60-bac3-481c-bb29-48e58ee43fed-kube-api-access-pj97z" (OuterVolumeSpecName: "kube-api-access-pj97z") pod "98b74a60-bac3-481c-bb29-48e58ee43fed" (UID: "98b74a60-bac3-481c-bb29-48e58ee43fed"). InnerVolumeSpecName "kube-api-access-pj97z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.276829 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "98b74a60-bac3-481c-bb29-48e58ee43fed" (UID: "98b74a60-bac3-481c-bb29-48e58ee43fed"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.278743 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-config-data" (OuterVolumeSpecName: "config-data") pod "98b74a60-bac3-481c-bb29-48e58ee43fed" (UID: "98b74a60-bac3-481c-bb29-48e58ee43fed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.281566 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98b74a60-bac3-481c-bb29-48e58ee43fed" (UID: "98b74a60-bac3-481c-bb29-48e58ee43fed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.343479 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3afe70ea-3dbf-495d-99ab-c2d6af72d624-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.343531 4981 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.343549 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.343569 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj97z\" (UniqueName: \"kubernetes.io/projected/98b74a60-bac3-481c-bb29-48e58ee43fed-kube-api-access-pj97z\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.343590 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.343609 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0e6751ea-45b4-493d-8a07-88d32d84625a-logs\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.343627 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0e6751ea-45b4-493d-8a07-88d32d84625a-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.343649 4981 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3afe70ea-3dbf-495d-99ab-c2d6af72d624-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.343671 4981 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.343694 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0e6751ea-45b4-493d-8a07-88d32d84625a-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.343716 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kvm2\" (UniqueName: \"kubernetes.io/projected/0e6751ea-45b4-493d-8a07-88d32d84625a-kube-api-access-7kvm2\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.343738 4981 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0e6751ea-45b4-493d-8a07-88d32d84625a-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.343759 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbxgj\" (UniqueName: \"kubernetes.io/projected/3afe70ea-3dbf-495d-99ab-c2d6af72d624-kube-api-access-qbxgj\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.343780 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98b74a60-bac3-481c-bb29-48e58ee43fed-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.414151 4981 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod5b8f224d-ac44-43c1-bf59-d16858e364ec"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod5b8f224d-ac44-43c1-bf59-d16858e364ec] : Timed out while waiting for systemd to remove kubepods-besteffort-pod5b8f224d_ac44_43c1_bf59_d16858e364ec.slice"
Jan 28 15:22:23 crc kubenswrapper[4981]: E0128 15:22:23.414263 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod5b8f224d-ac44-43c1-bf59-d16858e364ec] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod5b8f224d-ac44-43c1-bf59-d16858e364ec] : Timed out while waiting for systemd to remove kubepods-besteffort-pod5b8f224d_ac44_43c1_bf59_d16858e364ec.slice" pod="openstack/dnsmasq-dns-55fff446b9-7xzlx" podUID="5b8f224d-ac44-43c1-bf59-d16858e364ec"
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.671493 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7cb9bfb74f-ppc62" event={"ID":"0e6751ea-45b4-493d-8a07-88d32d84625a","Type":"ContainerDied","Data":"dbc5e910ca9c040e4e4e4abca239022c4eb40b28ce743707e5b65e6504930dd2"}
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.671527 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7cb9bfb74f-ppc62"
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.674252 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kcxvv" event={"ID":"98b74a60-bac3-481c-bb29-48e58ee43fed","Type":"ContainerDied","Data":"038e9f39554e3fa15ed4de4f9adddfb0484167980744d39f0fff83476d974528"}
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.674266 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kcxvv"
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.674587 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="038e9f39554e3fa15ed4de4f9adddfb0484167980744d39f0fff83476d974528"
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.676363 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-d54b7587-prgkj"
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.676375 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-d54b7587-prgkj" event={"ID":"3afe70ea-3dbf-495d-99ab-c2d6af72d624","Type":"ContainerDied","Data":"54fc8ae2ae995327b7798e2e7153031d8cba03a74a4cb3b6ff29502e0f173e38"}
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.676378 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55fff446b9-7xzlx"
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.754524 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-d54b7587-prgkj"]
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.769248 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-d54b7587-prgkj"]
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.797130 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-7xzlx"]
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.804515 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55fff446b9-7xzlx"]
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.818732 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7cb9bfb74f-ppc62"]
Jan 28 15:22:23 crc kubenswrapper[4981]: I0128 15:22:23.837341 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7cb9bfb74f-ppc62"]
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.320366 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-kcxvv"]
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.327789 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-kcxvv"]
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.342691 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5mpwx"]
Jan 28 15:22:24 crc kubenswrapper[4981]: E0128 15:22:24.343233 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98b74a60-bac3-481c-bb29-48e58ee43fed" containerName="keystone-bootstrap"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.343250 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="98b74a60-bac3-481c-bb29-48e58ee43fed" containerName="keystone-bootstrap"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.343478 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="98b74a60-bac3-481c-bb29-48e58ee43fed" containerName="keystone-bootstrap"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.344175 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.346381 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.346846 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.346869 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.347215 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.347264 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-28pgk"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.354956 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5mpwx"]
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.463982 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-scripts\") pod \"keystone-bootstrap-5mpwx\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.464068 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-credential-keys\") pod \"keystone-bootstrap-5mpwx\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.464136 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prk59\" (UniqueName: \"kubernetes.io/projected/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-kube-api-access-prk59\") pod \"keystone-bootstrap-5mpwx\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.464162 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-config-data\") pod \"keystone-bootstrap-5mpwx\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.464178 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-combined-ca-bundle\") pod \"keystone-bootstrap-5mpwx\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.464252 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-fernet-keys\") pod \"keystone-bootstrap-5mpwx\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.565445 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-credential-keys\") pod \"keystone-bootstrap-5mpwx\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.565536 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prk59\" (UniqueName: \"kubernetes.io/projected/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-kube-api-access-prk59\") pod \"keystone-bootstrap-5mpwx\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.565564 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-config-data\") pod \"keystone-bootstrap-5mpwx\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.565580 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-combined-ca-bundle\") pod \"keystone-bootstrap-5mpwx\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.565648 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-fernet-keys\") pod \"keystone-bootstrap-5mpwx\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.565715 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-scripts\") pod \"keystone-bootstrap-5mpwx\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.573587 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-combined-ca-bundle\") pod \"keystone-bootstrap-5mpwx\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.573874 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-credential-keys\") pod \"keystone-bootstrap-5mpwx\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.576144 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-config-data\") pod \"keystone-bootstrap-5mpwx\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.576444 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-scripts\") pod \"keystone-bootstrap-5mpwx\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.587229 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prk59\" (UniqueName: \"kubernetes.io/projected/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-kube-api-access-prk59\") pod \"keystone-bootstrap-5mpwx\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.595327 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-fernet-keys\") pod \"keystone-bootstrap-5mpwx\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:24 crc kubenswrapper[4981]: I0128 15:22:24.699994 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5mpwx"
Jan 28 15:22:25 crc kubenswrapper[4981]: I0128 15:22:25.329439 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e6751ea-45b4-493d-8a07-88d32d84625a" path="/var/lib/kubelet/pods/0e6751ea-45b4-493d-8a07-88d32d84625a/volumes"
Jan 28 15:22:25 crc kubenswrapper[4981]: I0128 15:22:25.330283 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3afe70ea-3dbf-495d-99ab-c2d6af72d624" path="/var/lib/kubelet/pods/3afe70ea-3dbf-495d-99ab-c2d6af72d624/volumes"
Jan 28 15:22:25 crc kubenswrapper[4981]: I0128 15:22:25.330790 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b8f224d-ac44-43c1-bf59-d16858e364ec" path="/var/lib/kubelet/pods/5b8f224d-ac44-43c1-bf59-d16858e364ec/volumes"
Jan 28 15:22:25 crc kubenswrapper[4981]: I0128 15:22:25.331455 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98b74a60-bac3-481c-bb29-48e58ee43fed" path="/var/lib/kubelet/pods/98b74a60-bac3-481c-bb29-48e58ee43fed/volumes"
Jan 28 15:22:26 crc kubenswrapper[4981]: E0128 15:22:26.499454 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified"
Jan 28 15:22:26 crc kubenswrapper[4981]: E0128 15:22:26.499647 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zs4hx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-t6ht4_openstack(7722b5f2-e226-483f-9ae3-d2b5a9e5a605): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 28 15:22:26 crc kubenswrapper[4981]: E0128 15:22:26.500837 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-t6ht4" podUID="7722b5f2-e226-483f-9ae3-d2b5a9e5a605"
Jan 28 15:22:26 crc kubenswrapper[4981]: E0128 15:22:26.709219 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-t6ht4" podUID="7722b5f2-e226-483f-9ae3-d2b5a9e5a605"
Jan 28 15:22:26 crc kubenswrapper[4981]: E0128 15:22:26.868517 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified"
Jan 28 15:22:26 crc kubenswrapper[4981]: E0128 15:22:26.869000 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5dh77hcbh66bhffh94h64bh576h658h584h58fhdh66dhd6h687h5b4h5c4h556hc9h659hcch77hf9hbfh9h5c4hd6h574h655h694h57fhbbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pst9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(3b1b3aa1-aaf2-4291-b735-00fc0ca3b455): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 28 15:22:26 crc kubenswrapper[4981]: I0128 15:22:26.966865 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-dd4f7ddbc-vtfnj"
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.110074 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/464cfecb-1d69-466f-90f1-9d9ac1166673-logs\") pod \"464cfecb-1d69-466f-90f1-9d9ac1166673\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") "
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.110155 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/464cfecb-1d69-466f-90f1-9d9ac1166673-horizon-secret-key\") pod \"464cfecb-1d69-466f-90f1-9d9ac1166673\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") "
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.110179 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/464cfecb-1d69-466f-90f1-9d9ac1166673-config-data\") pod \"464cfecb-1d69-466f-90f1-9d9ac1166673\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") "
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.110263 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl2c9\" (UniqueName: \"kubernetes.io/projected/464cfecb-1d69-466f-90f1-9d9ac1166673-kube-api-access-sl2c9\") pod \"464cfecb-1d69-466f-90f1-9d9ac1166673\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") "
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.110298 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/464cfecb-1d69-466f-90f1-9d9ac1166673-scripts\") pod \"464cfecb-1d69-466f-90f1-9d9ac1166673\" (UID: \"464cfecb-1d69-466f-90f1-9d9ac1166673\") "
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.111049 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464cfecb-1d69-466f-90f1-9d9ac1166673-scripts" (OuterVolumeSpecName: "scripts") pod "464cfecb-1d69-466f-90f1-9d9ac1166673" (UID: "464cfecb-1d69-466f-90f1-9d9ac1166673"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.111095 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/464cfecb-1d69-466f-90f1-9d9ac1166673-config-data" (OuterVolumeSpecName: "config-data") pod "464cfecb-1d69-466f-90f1-9d9ac1166673" (UID: "464cfecb-1d69-466f-90f1-9d9ac1166673"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.111277 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/464cfecb-1d69-466f-90f1-9d9ac1166673-logs" (OuterVolumeSpecName: "logs") pod "464cfecb-1d69-466f-90f1-9d9ac1166673" (UID: "464cfecb-1d69-466f-90f1-9d9ac1166673"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.118006 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/464cfecb-1d69-466f-90f1-9d9ac1166673-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "464cfecb-1d69-466f-90f1-9d9ac1166673" (UID: "464cfecb-1d69-466f-90f1-9d9ac1166673"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.122962 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/464cfecb-1d69-466f-90f1-9d9ac1166673-kube-api-access-sl2c9" (OuterVolumeSpecName: "kube-api-access-sl2c9") pod "464cfecb-1d69-466f-90f1-9d9ac1166673" (UID: "464cfecb-1d69-466f-90f1-9d9ac1166673"). InnerVolumeSpecName "kube-api-access-sl2c9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.211986 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/464cfecb-1d69-466f-90f1-9d9ac1166673-logs\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.212020 4981 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/464cfecb-1d69-466f-90f1-9d9ac1166673-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.212031 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/464cfecb-1d69-466f-90f1-9d9ac1166673-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.212039 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl2c9\" (UniqueName: \"kubernetes.io/projected/464cfecb-1d69-466f-90f1-9d9ac1166673-kube-api-access-sl2c9\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.212049 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/464cfecb-1d69-466f-90f1-9d9ac1166673-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.309314 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-54f45b9c5b-drxcg"]
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.715883 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-dd4f7ddbc-vtfnj" event={"ID":"464cfecb-1d69-466f-90f1-9d9ac1166673","Type":"ContainerDied","Data":"fda0cc754d907a974336901c63b9942c13be90e41744747c612d4f79112eb39e"}
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.715958 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-dd4f7ddbc-vtfnj"
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.764117 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-dd4f7ddbc-vtfnj"]
Jan 28 15:22:27 crc kubenswrapper[4981]: I0128 15:22:27.773699 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-dd4f7ddbc-vtfnj"]
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.008328 4981 scope.go:117] "RemoveContainer" containerID="739ec6b9d11796379430bfea0145e649df712293a6aa8b3459fa9836acef4d6a"
Jan 28 15:22:28 crc kubenswrapper[4981]: E0128 15:22:28.069983 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Jan 28 15:22:28 crc kubenswrapper[4981]: E0128 15:22:28.070499 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4rbls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-h8htg_openstack(5a747315-c181-4459-ae1d-3c0c5252efb7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 28 15:22:28 crc kubenswrapper[4981]: E0128 15:22:28.072474 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-h8htg" podUID="5a747315-c181-4459-ae1d-3c0c5252efb7"
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.201804 4981 scope.go:117] "RemoveContainer" containerID="1400fdd1740fec5667aa1a20024c06de82ce3ec38e36b5d30951c438a25c329c"
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.247206 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts"
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.279288 4981 scope.go:117] "RemoveContainer" containerID="46ae4addff2217ea4b9f2550bb6a0c9d004f0bb0598d685dfec010f6956885f0"
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.333964 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-ovsdbserver-nb\") pod \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") "
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.334024 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcksv\" (UniqueName: \"kubernetes.io/projected/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-kube-api-access-jcksv\") pod \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") "
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.334047 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-config\") pod \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") "
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.334541 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-dns-swift-storage-0\") pod \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") "
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.334662 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-ovsdbserver-sb\") pod \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") "
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.334685 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-dns-svc\") pod \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\" (UID: \"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0\") "
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.339697 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-kube-api-access-jcksv" (OuterVolumeSpecName: "kube-api-access-jcksv") pod "200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" (UID: "200b2bf9-ee0c-42d2-9307-d6f0868cd3e0"). InnerVolumeSpecName "kube-api-access-jcksv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.376169 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" (UID: "200b2bf9-ee0c-42d2-9307-d6f0868cd3e0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.385002 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-config" (OuterVolumeSpecName: "config") pod "200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" (UID: "200b2bf9-ee0c-42d2-9307-d6f0868cd3e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.396595 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" (UID: "200b2bf9-ee0c-42d2-9307-d6f0868cd3e0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.412246 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" (UID: "200b2bf9-ee0c-42d2-9307-d6f0868cd3e0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.428628 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" (UID: "200b2bf9-ee0c-42d2-9307-d6f0868cd3e0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.436401 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.436431 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.436445 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcksv\" (UniqueName: \"kubernetes.io/projected/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-kube-api-access-jcksv\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.436461 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.436473 4981 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.436483 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.525713 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d9d89fcfb-mwsgh"]
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.589316 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.643535 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5mpwx"]
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.738612 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jd49r" event={"ID":"82101008-6112-4a68-8776-7a2c896b5eab","Type":"ContainerStarted","Data":"21bd324287996431c55b2bf3804ea5d22ed628d8e31df40fee954b5b069d4c92"}
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.739539 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.741610 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts"
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.741643 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" event={"ID":"200b2bf9-ee0c-42d2-9307-d6f0868cd3e0","Type":"ContainerDied","Data":"efe0c946197296eca72d0fb2155ffbb0feb1ee4c495692f24c3f5f0dabbb4fc4"}
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.741701 4981 scope.go:117] "RemoveContainer" containerID="ab2a0c3b00b1e84b7c8d81cf457df0ea9d9a59e155067872470da7822793dd35"
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.747205 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54f45b9c5b-drxcg" event={"ID":"465c6840-8900-4520-b80a-aab52f45c173","Type":"ContainerStarted","Data":"aa234ada1f8c83e06fe3243406375953ce8cd67774514a2666a26efecc4b530a"}
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.752162 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-jd49r" podStartSLOduration=2.610959593 podStartE2EDuration="38.752143937s" podCreationTimestamp="2026-01-28 15:21:50 +0000 UTC" firstStartedPulling="2026-01-28 15:21:51.84504343 +0000 UTC m=+1123.297201671" lastFinishedPulling="2026-01-28 15:22:27.986227754 +0000 UTC m=+1159.438386015" observedRunningTime="2026-01-28 15:22:28.749876308 +0000 UTC m=+1160.202034549" watchObservedRunningTime="2026-01-28 15:22:28.752143937 +0000 UTC m=+1160.204302188"
Jan 28 15:22:28 crc kubenswrapper[4981]: E0128 15:22:28.752778 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-h8htg" podUID="5a747315-c181-4459-ae1d-3c0c5252efb7"
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.788477 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-k8fts"]
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.798039 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-77585f5f8c-k8fts"]
Jan 28 15:22:28 crc kubenswrapper[4981]: I0128 15:22:28.876333 4981 scope.go:117] "RemoveContainer" containerID="93282a6f9f134d74734185bd38f2556ec80ebdf2e5b6861a5bc306e450d45d19"
Jan 28 15:22:29 crc kubenswrapper[4981]: I0128 15:22:29.341684 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" path="/var/lib/kubelet/pods/200b2bf9-ee0c-42d2-9307-d6f0868cd3e0/volumes"
Jan 28 15:22:29 crc kubenswrapper[4981]: I0128 15:22:29.343221 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="464cfecb-1d69-466f-90f1-9d9ac1166673" path="/var/lib/kubelet/pods/464cfecb-1d69-466f-90f1-9d9ac1166673/volumes"
Jan 28 15:22:29 crc kubenswrapper[4981]: I0128 15:22:29.765402 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455","Type":"ContainerStarted","Data":"82d20e501ba1cc8b70db300d5117f005d6119aab41213d8f924cfdc5b62f3094"}
Jan 28 15:22:29 crc kubenswrapper[4981]: I0128 15:22:29.777861 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf","Type":"ContainerStarted","Data":"bebd08ae06e0b4910da4cd505ee93d126322a0a659d8d14bc8e0164e6c8cbccc"}
Jan 28 15:22:29 crc kubenswrapper[4981]: I0128
15:22:29.777896 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf","Type":"ContainerStarted","Data":"9a4942493b50881421cbfe4857c7b8fa1be0cb02a3604f45817305ed2f68c764"} Jan 28 15:22:29 crc kubenswrapper[4981]: I0128 15:22:29.782524 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba373618-9613-48d3-9023-ce519f54fb7f","Type":"ContainerStarted","Data":"d810d5ee9d29efe6acb870a3e3f5e1310ce05c5bd1e8da15cfd060324fb7a8f8"} Jan 28 15:22:29 crc kubenswrapper[4981]: I0128 15:22:29.782570 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba373618-9613-48d3-9023-ce519f54fb7f","Type":"ContainerStarted","Data":"291833be7fed236c8947a2e228d0acd10e0dff79da94771cf435760c350eb095"} Jan 28 15:22:29 crc kubenswrapper[4981]: I0128 15:22:29.787846 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54f45b9c5b-drxcg" event={"ID":"465c6840-8900-4520-b80a-aab52f45c173","Type":"ContainerStarted","Data":"9b99100a343508bba0a99a5e7f2da53ebd6eef781ca2ce91d1729c68522926f0"} Jan 28 15:22:29 crc kubenswrapper[4981]: I0128 15:22:29.787868 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54f45b9c5b-drxcg" event={"ID":"465c6840-8900-4520-b80a-aab52f45c173","Type":"ContainerStarted","Data":"0efb8f5d9806b6b93762b01f054f77ce9c400ef76a1e729a9449091684c8c2bc"} Jan 28 15:22:29 crc kubenswrapper[4981]: I0128 15:22:29.798426 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d9d89fcfb-mwsgh" event={"ID":"d02db79a-7f4f-453c-8e92-2e8291f442f1","Type":"ContainerStarted","Data":"98e1fd876faffb36a06ceff3cc585ed236e27f1b22cdd97642d0db9c62ea0554"} Jan 28 15:22:29 crc kubenswrapper[4981]: I0128 15:22:29.798461 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d9d89fcfb-mwsgh" event={"ID":"d02db79a-7f4f-453c-8e92-2e8291f442f1","Type":"ContainerStarted","Data":"834c564f4d52165ecc29e663755b5f99e995424f7081c32456fb511d5877e930"} Jan 28 15:22:29 crc kubenswrapper[4981]: I0128 15:22:29.798470 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d9d89fcfb-mwsgh" event={"ID":"d02db79a-7f4f-453c-8e92-2e8291f442f1","Type":"ContainerStarted","Data":"1e85ac7dac063aafcca5ccddbdef100ef85ef52bd190d70816ef45e557b0c036"} Jan 28 15:22:29 crc kubenswrapper[4981]: I0128 15:22:29.829103 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5mpwx" event={"ID":"ca96a8b3-3c53-43ed-bd39-f0bb55a04250","Type":"ContainerStarted","Data":"c5944497a2ac4ceb53b6cbe937cd24e7e3dcf97274ecb387b492b2513b8a7298"} Jan 28 15:22:29 crc kubenswrapper[4981]: I0128 15:22:29.829148 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5mpwx" event={"ID":"ca96a8b3-3c53-43ed-bd39-f0bb55a04250","Type":"ContainerStarted","Data":"60e0c2e0867ff5af68f08296a7b151901adcbebc1183843b90480b442c867994"} Jan 28 15:22:29 crc kubenswrapper[4981]: I0128 15:22:29.939855 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-54f45b9c5b-drxcg" podStartSLOduration=28.498316031 podStartE2EDuration="28.93983447s" podCreationTimestamp="2026-01-28 15:22:01 +0000 UTC" firstStartedPulling="2026-01-28 15:22:28.043311132 +0000 UTC m=+1159.495469383" lastFinishedPulling="2026-01-28 15:22:28.484829581 +0000 UTC m=+1159.936987822" 
observedRunningTime="2026-01-28 15:22:29.896477632 +0000 UTC m=+1161.348635873" watchObservedRunningTime="2026-01-28 15:22:29.93983447 +0000 UTC m=+1161.391992711" Jan 28 15:22:29 crc kubenswrapper[4981]: I0128 15:22:29.942080 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5mpwx" podStartSLOduration=5.942069999 podStartE2EDuration="5.942069999s" podCreationTimestamp="2026-01-28 15:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:29.933899674 +0000 UTC m=+1161.386057915" watchObservedRunningTime="2026-01-28 15:22:29.942069999 +0000 UTC m=+1161.394228240" Jan 28 15:22:29 crc kubenswrapper[4981]: I0128 15:22:29.967456 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6d9d89fcfb-mwsgh" podStartSLOduration=28.967441305 podStartE2EDuration="28.967441305s" podCreationTimestamp="2026-01-28 15:22:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:29.956427886 +0000 UTC m=+1161.408586117" watchObservedRunningTime="2026-01-28 15:22:29.967441305 +0000 UTC m=+1161.419599546" Jan 28 15:22:30 crc kubenswrapper[4981]: I0128 15:22:30.838672 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf","Type":"ContainerStarted","Data":"d460fc23afba8c3d14a8a7e4e28bbd2cfcf4a79c8689e2e7ef1614c95984f872"} Jan 28 15:22:30 crc kubenswrapper[4981]: I0128 15:22:30.847267 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba373618-9613-48d3-9023-ce519f54fb7f","Type":"ContainerStarted","Data":"470893bbe07f2153e5d01594057a84bf22009bbbfdb4c5274c8239c021c7c3e8"} Jan 28 15:22:30 crc kubenswrapper[4981]: I0128 15:22:30.875180 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=19.875164168 podStartE2EDuration="19.875164168s" podCreationTimestamp="2026-01-28 15:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:30.871593725 +0000 UTC m=+1162.323751966" watchObservedRunningTime="2026-01-28 15:22:30.875164168 +0000 UTC m=+1162.327322409" Jan 28 15:22:30 crc kubenswrapper[4981]: I0128 15:22:30.909089 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=19.909072548 podStartE2EDuration="19.909072548s" podCreationTimestamp="2026-01-28 15:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:30.900628777 +0000 UTC m=+1162.352787018" watchObservedRunningTime="2026-01-28 15:22:30.909072548 +0000 UTC m=+1162.361230789" Jan 28 15:22:31 crc kubenswrapper[4981]: I0128 15:22:31.602828 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:31 crc kubenswrapper[4981]: I0128 15:22:31.602876 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-54f45b9c5b-drxcg" Jan 28 15:22:31 crc kubenswrapper[4981]: I0128 15:22:31.681286 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:31 crc kubenswrapper[4981]: I0128 15:22:31.681387 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:32 crc kubenswrapper[4981]: I0128 15:22:32.079270 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 15:22:32 crc kubenswrapper[4981]: I0128 15:22:32.079346 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 15:22:32 crc kubenswrapper[4981]: I0128 15:22:32.121426 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 15:22:32 crc kubenswrapper[4981]: I0128 15:22:32.131660 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 15:22:32 crc kubenswrapper[4981]: I0128 15:22:32.329115 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 15:22:32 crc kubenswrapper[4981]: I0128 15:22:32.329380 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 15:22:32 crc kubenswrapper[4981]: I0128 15:22:32.366066 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 15:22:32 crc kubenswrapper[4981]: I0128 15:22:32.384033 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 15:22:32 crc kubenswrapper[4981]: I0128 15:22:32.557536 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-77585f5f8c-k8fts" podUID="200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.125:5353: i/o timeout" Jan 28 15:22:32 crc kubenswrapper[4981]: I0128 15:22:32.866153 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 15:22:32 crc kubenswrapper[4981]: I0128 15:22:32.866203 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 15:22:32 crc kubenswrapper[4981]: I0128 15:22:32.866214 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 15:22:32 crc kubenswrapper[4981]: I0128 15:22:32.866223 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 15:22:34 crc kubenswrapper[4981]: I0128 15:22:34.907574 4981 generic.go:334] "Generic (PLEG): container finished" podID="ca96a8b3-3c53-43ed-bd39-f0bb55a04250" containerID="c5944497a2ac4ceb53b6cbe937cd24e7e3dcf97274ecb387b492b2513b8a7298" exitCode=0 Jan 28 15:22:34 crc kubenswrapper[4981]: I0128 15:22:34.907898 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5mpwx" event={"ID":"ca96a8b3-3c53-43ed-bd39-f0bb55a04250","Type":"ContainerDied","Data":"c5944497a2ac4ceb53b6cbe937cd24e7e3dcf97274ecb387b492b2513b8a7298"} Jan 28 15:22:34 crc kubenswrapper[4981]: I0128 15:22:34.911049 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455","Type":"ContainerStarted","Data":"0e2c30c729c20f5fa057ed3c4f30df1fb70059bff90e303470440c08f8ca68ee"} Jan 28 15:22:35 crc kubenswrapper[4981]: I0128 15:22:35.943755 4981 generic.go:334] "Generic (PLEG): container finished" podID="82101008-6112-4a68-8776-7a2c896b5eab" containerID="21bd324287996431c55b2bf3804ea5d22ed628d8e31df40fee954b5b069d4c92" exitCode=0 Jan 28 15:22:35 crc kubenswrapper[4981]: I0128 15:22:35.943826 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jd49r" event={"ID":"82101008-6112-4a68-8776-7a2c896b5eab","Type":"ContainerDied","Data":"21bd324287996431c55b2bf3804ea5d22ed628d8e31df40fee954b5b069d4c92"} Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.333107 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5mpwx" Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.504582 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-config-data\") pod \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.504653 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-scripts\") pod \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.504688 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-credential-keys\") pod \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.504739 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-fernet-keys\") pod \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.504777 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prk59\" (UniqueName: \"kubernetes.io/projected/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-kube-api-access-prk59\") pod \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.504840 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-combined-ca-bundle\") pod \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\" (UID: \"ca96a8b3-3c53-43ed-bd39-f0bb55a04250\") " Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.509785 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-kube-api-access-prk59" (OuterVolumeSpecName: "kube-api-access-prk59") pod "ca96a8b3-3c53-43ed-bd39-f0bb55a04250" (UID: "ca96a8b3-3c53-43ed-bd39-f0bb55a04250"). InnerVolumeSpecName "kube-api-access-prk59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.512078 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-scripts" (OuterVolumeSpecName: "scripts") pod "ca96a8b3-3c53-43ed-bd39-f0bb55a04250" (UID: "ca96a8b3-3c53-43ed-bd39-f0bb55a04250"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.513000 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ca96a8b3-3c53-43ed-bd39-f0bb55a04250" (UID: "ca96a8b3-3c53-43ed-bd39-f0bb55a04250"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.533380 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ca96a8b3-3c53-43ed-bd39-f0bb55a04250" (UID: "ca96a8b3-3c53-43ed-bd39-f0bb55a04250"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.538640 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ca96a8b3-3c53-43ed-bd39-f0bb55a04250" (UID: "ca96a8b3-3c53-43ed-bd39-f0bb55a04250"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.546308 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-config-data" (OuterVolumeSpecName: "config-data") pod "ca96a8b3-3c53-43ed-bd39-f0bb55a04250" (UID: "ca96a8b3-3c53-43ed-bd39-f0bb55a04250"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.606776 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.606810 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.606819 4981 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.606831 4981 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.606840 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prk59\" (UniqueName: \"kubernetes.io/projected/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-kube-api-access-prk59\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.606850 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca96a8b3-3c53-43ed-bd39-f0bb55a04250-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.972167 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5mpwx" Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.972713 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5mpwx" event={"ID":"ca96a8b3-3c53-43ed-bd39-f0bb55a04250","Type":"ContainerDied","Data":"60e0c2e0867ff5af68f08296a7b151901adcbebc1183843b90480b442c867994"} Jan 28 15:22:36 crc kubenswrapper[4981]: I0128 15:22:36.976864 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60e0c2e0867ff5af68f08296a7b151901adcbebc1183843b90480b442c867994" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.038044 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-569ff6748d-zhgp9"] Jan 28 15:22:37 crc kubenswrapper[4981]: E0128 15:22:37.050632 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" containerName="init" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.050667 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" containerName="init" Jan 28 15:22:37 crc kubenswrapper[4981]: E0128 15:22:37.050694 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca96a8b3-3c53-43ed-bd39-f0bb55a04250" containerName="keystone-bootstrap" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.050702 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca96a8b3-3c53-43ed-bd39-f0bb55a04250" containerName="keystone-bootstrap" Jan 28 15:22:37 crc kubenswrapper[4981]: E0128 15:22:37.050729 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" containerName="dnsmasq-dns" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 
15:22:37.050736 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" containerName="dnsmasq-dns" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.051002 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca96a8b3-3c53-43ed-bd39-f0bb55a04250" containerName="keystone-bootstrap" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.051024 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="200b2bf9-ee0c-42d2-9307-d6f0868cd3e0" containerName="dnsmasq-dns" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.059773 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.061722 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-569ff6748d-zhgp9"] Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.063384 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.063617 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.063723 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-28pgk" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.063905 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.064141 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.064316 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.218032 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pz8f\" (UniqueName: \"kubernetes.io/projected/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-kube-api-access-9pz8f\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.218081 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-fernet-keys\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.218123 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-combined-ca-bundle\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.218140 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-scripts\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.218159 4981 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-public-tls-certs\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.218177 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-config-data\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.218400 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-credential-keys\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.218422 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-internal-tls-certs\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.319425 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pz8f\" (UniqueName: \"kubernetes.io/projected/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-kube-api-access-9pz8f\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.319736 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-fernet-keys\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.319779 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-combined-ca-bundle\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.319795 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-scripts\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.319814 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-public-tls-certs\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.319833 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-config-data\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.319876 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-credential-keys\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.319893 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-internal-tls-certs\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.325487 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-internal-tls-certs\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.326794 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-scripts\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.327680 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-jd49r" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.328420 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-config-data\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.329793 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-credential-keys\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.338999 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-combined-ca-bundle\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.339576 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-fernet-keys\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.344801 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pz8f\" (UniqueName: \"kubernetes.io/projected/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-kube-api-access-9pz8f\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.349005 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8f290ab1-489a-4b7e-9815-a6bd2a528f5e-public-tls-certs\") pod \"keystone-569ff6748d-zhgp9\" (UID: \"8f290ab1-489a-4b7e-9815-a6bd2a528f5e\") " pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.398063 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.426853 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82101008-6112-4a68-8776-7a2c896b5eab-combined-ca-bundle\") pod \"82101008-6112-4a68-8776-7a2c896b5eab\" (UID: \"82101008-6112-4a68-8776-7a2c896b5eab\") " Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.427008 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95cw8\" (UniqueName: \"kubernetes.io/projected/82101008-6112-4a68-8776-7a2c896b5eab-kube-api-access-95cw8\") pod \"82101008-6112-4a68-8776-7a2c896b5eab\" (UID: \"82101008-6112-4a68-8776-7a2c896b5eab\") " Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.427109 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82101008-6112-4a68-8776-7a2c896b5eab-db-sync-config-data\") pod \"82101008-6112-4a68-8776-7a2c896b5eab\" (UID: \"82101008-6112-4a68-8776-7a2c896b5eab\") " Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.439501 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82101008-6112-4a68-8776-7a2c896b5eab-kube-api-access-95cw8" (OuterVolumeSpecName: "kube-api-access-95cw8") pod "82101008-6112-4a68-8776-7a2c896b5eab" (UID: "82101008-6112-4a68-8776-7a2c896b5eab"). InnerVolumeSpecName "kube-api-access-95cw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.443493 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82101008-6112-4a68-8776-7a2c896b5eab-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "82101008-6112-4a68-8776-7a2c896b5eab" (UID: "82101008-6112-4a68-8776-7a2c896b5eab"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.506313 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82101008-6112-4a68-8776-7a2c896b5eab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82101008-6112-4a68-8776-7a2c896b5eab" (UID: "82101008-6112-4a68-8776-7a2c896b5eab"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.544321 4981 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/82101008-6112-4a68-8776-7a2c896b5eab-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.544355 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82101008-6112-4a68-8776-7a2c896b5eab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.544366 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95cw8\" (UniqueName: \"kubernetes.io/projected/82101008-6112-4a68-8776-7a2c896b5eab-kube-api-access-95cw8\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.982344 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-569ff6748d-zhgp9"] Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.988425 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jd49r" event={"ID":"82101008-6112-4a68-8776-7a2c896b5eab","Type":"ContainerDied","Data":"af2a087fb3908d036ced8e80ce114f1f5d5803fc1221d45403538320d4ad1987"} Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.988459 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af2a087fb3908d036ced8e80ce114f1f5d5803fc1221d45403538320d4ad1987" Jan 28 15:22:37 crc kubenswrapper[4981]: I0128 15:22:37.988535 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jd49r" Jan 28 15:22:37 crc kubenswrapper[4981]: W0128 15:22:37.993699 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f290ab1_489a_4b7e_9815_a6bd2a528f5e.slice/crio-0b1d3b70d3c4e84b93abb132a256894c294e983a511dcfeb8ee581ab15295944 WatchSource:0}: Error finding container 0b1d3b70d3c4e84b93abb132a256894c294e983a511dcfeb8ee581ab15295944: Status 404 returned error can't find the container with id 0b1d3b70d3c4e84b93abb132a256894c294e983a511dcfeb8ee581ab15295944 Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.301243 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7bd55659d6-qkw24"] Jan 28 15:22:38 crc kubenswrapper[4981]: E0128 15:22:38.301816 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82101008-6112-4a68-8776-7a2c896b5eab" containerName="barbican-db-sync" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.301827 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="82101008-6112-4a68-8776-7a2c896b5eab" containerName="barbican-db-sync" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.302000 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="82101008-6112-4a68-8776-7a2c896b5eab" containerName="barbican-db-sync" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.302821 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.309330 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.309901 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.330629 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-7x9vx" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.373452 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-695f5b56f5-7h6s9"] Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.376915 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-695f5b56f5-7h6s9" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.381612 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.416617 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7bd55659d6-qkw24"] Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.481301 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9001e7fd-73ee-4169-a239-fa6452ac69d2-config-data\") pod \"barbican-keystone-listener-7bd55659d6-qkw24\" (UID: \"9001e7fd-73ee-4169-a239-fa6452ac69d2\") " pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.481407 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9001e7fd-73ee-4169-a239-fa6452ac69d2-logs\") pod \"barbican-keystone-listener-7bd55659d6-qkw24\" (UID: \"9001e7fd-73ee-4169-a239-fa6452ac69d2\") " pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.481426 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9001e7fd-73ee-4169-a239-fa6452ac69d2-config-data-custom\") pod \"barbican-keystone-listener-7bd55659d6-qkw24\" (UID: \"9001e7fd-73ee-4169-a239-fa6452ac69d2\") " pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.481447 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd7vp\" (UniqueName: \"kubernetes.io/projected/9001e7fd-73ee-4169-a239-fa6452ac69d2-kube-api-access-fd7vp\") pod \"barbican-keystone-listener-7bd55659d6-qkw24\" (UID: \"9001e7fd-73ee-4169-a239-fa6452ac69d2\") " pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.481463 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e2d1563-14e3-41bc-8830-51e28da77c5e-config-data-custom\") pod \"barbican-worker-695f5b56f5-7h6s9\" (UID: \"2e2d1563-14e3-41bc-8830-51e28da77c5e\") " pod="openstack/barbican-worker-695f5b56f5-7h6s9" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.481528 4981 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2d1563-14e3-41bc-8830-51e28da77c5e-combined-ca-bundle\") pod \"barbican-worker-695f5b56f5-7h6s9\" (UID: \"2e2d1563-14e3-41bc-8830-51e28da77c5e\") " pod="openstack/barbican-worker-695f5b56f5-7h6s9" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.481545 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-275lg\" (UniqueName: \"kubernetes.io/projected/2e2d1563-14e3-41bc-8830-51e28da77c5e-kube-api-access-275lg\") pod \"barbican-worker-695f5b56f5-7h6s9\" (UID: \"2e2d1563-14e3-41bc-8830-51e28da77c5e\") " pod="openstack/barbican-worker-695f5b56f5-7h6s9" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.481567 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2d1563-14e3-41bc-8830-51e28da77c5e-config-data\") pod \"barbican-worker-695f5b56f5-7h6s9\" (UID: \"2e2d1563-14e3-41bc-8830-51e28da77c5e\") " pod="openstack/barbican-worker-695f5b56f5-7h6s9" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.481613 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9001e7fd-73ee-4169-a239-fa6452ac69d2-combined-ca-bundle\") pod \"barbican-keystone-listener-7bd55659d6-qkw24\" (UID: \"9001e7fd-73ee-4169-a239-fa6452ac69d2\") " pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.481628 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e2d1563-14e3-41bc-8830-51e28da77c5e-logs\") pod \"barbican-worker-695f5b56f5-7h6s9\" (UID: \"2e2d1563-14e3-41bc-8830-51e28da77c5e\") " pod="openstack/barbican-worker-695f5b56f5-7h6s9" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.483798 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-695f5b56f5-7h6s9"] Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.513932 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-vmsvl"] Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.515350 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.532487 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-vmsvl"] Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.569653 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7ff86b68cd-spzqr"] Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.572782 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.575078 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.581363 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7ff86b68cd-spzqr"] Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.583246 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2d1563-14e3-41bc-8830-51e28da77c5e-combined-ca-bundle\") pod \"barbican-worker-695f5b56f5-7h6s9\" (UID: \"2e2d1563-14e3-41bc-8830-51e28da77c5e\") " pod="openstack/barbican-worker-695f5b56f5-7h6s9" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.583292 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-275lg\" (UniqueName: \"kubernetes.io/projected/2e2d1563-14e3-41bc-8830-51e28da77c5e-kube-api-access-275lg\") pod \"barbican-worker-695f5b56f5-7h6s9\" (UID: \"2e2d1563-14e3-41bc-8830-51e28da77c5e\") " pod="openstack/barbican-worker-695f5b56f5-7h6s9" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.583323 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2d1563-14e3-41bc-8830-51e28da77c5e-config-data\") pod \"barbican-worker-695f5b56f5-7h6s9\" (UID: \"2e2d1563-14e3-41bc-8830-51e28da77c5e\") " pod="openstack/barbican-worker-695f5b56f5-7h6s9" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.583358 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9001e7fd-73ee-4169-a239-fa6452ac69d2-combined-ca-bundle\") pod \"barbican-keystone-listener-7bd55659d6-qkw24\" (UID: \"9001e7fd-73ee-4169-a239-fa6452ac69d2\") " pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.583377 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e2d1563-14e3-41bc-8830-51e28da77c5e-logs\") pod \"barbican-worker-695f5b56f5-7h6s9\" (UID: \"2e2d1563-14e3-41bc-8830-51e28da77c5e\") " pod="openstack/barbican-worker-695f5b56f5-7h6s9" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.583415 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9001e7fd-73ee-4169-a239-fa6452ac69d2-config-data\") pod \"barbican-keystone-listener-7bd55659d6-qkw24\" (UID: \"9001e7fd-73ee-4169-a239-fa6452ac69d2\") " pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.583493 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9001e7fd-73ee-4169-a239-fa6452ac69d2-logs\") pod \"barbican-keystone-listener-7bd55659d6-qkw24\" (UID: \"9001e7fd-73ee-4169-a239-fa6452ac69d2\") " pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.583511 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9001e7fd-73ee-4169-a239-fa6452ac69d2-config-data-custom\") pod \"barbican-keystone-listener-7bd55659d6-qkw24\" (UID: 
\"9001e7fd-73ee-4169-a239-fa6452ac69d2\") " pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.583529 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2e2d1563-14e3-41bc-8830-51e28da77c5e-config-data-custom\") pod \"barbican-worker-695f5b56f5-7h6s9\" (UID: \"2e2d1563-14e3-41bc-8830-51e28da77c5e\") " pod="openstack/barbican-worker-695f5b56f5-7h6s9" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.583546 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd7vp\" (UniqueName: \"kubernetes.io/projected/9001e7fd-73ee-4169-a239-fa6452ac69d2-kube-api-access-fd7vp\") pod \"barbican-keystone-listener-7bd55659d6-qkw24\" (UID: \"9001e7fd-73ee-4169-a239-fa6452ac69d2\") " pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.585857 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e2d1563-14e3-41bc-8830-51e28da77c5e-logs\") pod \"barbican-worker-695f5b56f5-7h6s9\" (UID: \"2e2d1563-14e3-41bc-8830-51e28da77c5e\") " pod="openstack/barbican-worker-695f5b56f5-7h6s9" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.596055 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9001e7fd-73ee-4169-a239-fa6452ac69d2-logs\") pod \"barbican-keystone-listener-7bd55659d6-qkw24\" (UID: \"9001e7fd-73ee-4169-a239-fa6452ac69d2\") " pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.599470 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e2d1563-14e3-41bc-8830-51e28da77c5e-combined-ca-bundle\") pod \"barbican-worker-695f5b56f5-7h6s9\" (UID: \"2e2d1563-14e3-41bc-8830-51e28da77c5e\") " pod="openstack/barbican-worker-695f5b56f5-7h6s9" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.600231 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9001e7fd-73ee-4169-a239-fa6452ac69d2-config-data-custom\") pod \"barbican-keystone-listener-7bd55659d6-qkw24\" (UID: \"9001e7fd-73ee-4169-a239-fa6452ac69d2\") " pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.605250 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9001e7fd-73ee-4169-a239-fa6452ac69d2-config-data\") pod \"barbican-keystone-listener-7bd55659d6-qkw24\" (UID: \"9001e7fd-73ee-4169-a239-fa6452ac69d2\") " pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.605942 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9001e7fd-73ee-4169-a239-fa6452ac69d2-combined-ca-bundle\") pod \"barbican-keystone-listener-7bd55659d6-qkw24\" (UID: \"9001e7fd-73ee-4169-a239-fa6452ac69d2\") " pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.608747 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/2e2d1563-14e3-41bc-8830-51e28da77c5e-config-data-custom\") pod \"barbican-worker-695f5b56f5-7h6s9\" (UID: \"2e2d1563-14e3-41bc-8830-51e28da77c5e\") " pod="openstack/barbican-worker-695f5b56f5-7h6s9" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.609806 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd7vp\" (UniqueName: \"kubernetes.io/projected/9001e7fd-73ee-4169-a239-fa6452ac69d2-kube-api-access-fd7vp\") pod \"barbican-keystone-listener-7bd55659d6-qkw24\" (UID: \"9001e7fd-73ee-4169-a239-fa6452ac69d2\") " pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.628903 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-275lg\" (UniqueName: \"kubernetes.io/projected/2e2d1563-14e3-41bc-8830-51e28da77c5e-kube-api-access-275lg\") pod \"barbican-worker-695f5b56f5-7h6s9\" (UID: \"2e2d1563-14e3-41bc-8830-51e28da77c5e\") " pod="openstack/barbican-worker-695f5b56f5-7h6s9" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.631247 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e2d1563-14e3-41bc-8830-51e28da77c5e-config-data\") pod \"barbican-worker-695f5b56f5-7h6s9\" (UID: \"2e2d1563-14e3-41bc-8830-51e28da77c5e\") " pod="openstack/barbican-worker-695f5b56f5-7h6s9" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.666654 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.684415 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696f320e-3870-452a-ae89-f9ede235d6ce-combined-ca-bundle\") pod \"barbican-api-7ff86b68cd-spzqr\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.684468 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-config\") pod \"dnsmasq-dns-59d5ff467f-vmsvl\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") " pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.684504 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-vmsvl\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") " pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.684521 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-586hg\" (UniqueName: \"kubernetes.io/projected/7f20dc76-f79f-4102-822b-a5e03bb18abc-kube-api-access-586hg\") pod \"dnsmasq-dns-59d5ff467f-vmsvl\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") " pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.684536 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dbbj\" (UniqueName: 
\"kubernetes.io/projected/696f320e-3870-452a-ae89-f9ede235d6ce-kube-api-access-8dbbj\") pod \"barbican-api-7ff86b68cd-spzqr\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.684559 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/696f320e-3870-452a-ae89-f9ede235d6ce-config-data-custom\") pod \"barbican-api-7ff86b68cd-spzqr\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.684597 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/696f320e-3870-452a-ae89-f9ede235d6ce-logs\") pod \"barbican-api-7ff86b68cd-spzqr\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.684641 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-vmsvl\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") " pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.684660 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-vmsvl\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") " pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.684674 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-vmsvl\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") " pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.684700 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/696f320e-3870-452a-ae89-f9ede235d6ce-config-data\") pod \"barbican-api-7ff86b68cd-spzqr\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.728796 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-695f5b56f5-7h6s9" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.790814 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dbbj\" (UniqueName: \"kubernetes.io/projected/696f320e-3870-452a-ae89-f9ede235d6ce-kube-api-access-8dbbj\") pod \"barbican-api-7ff86b68cd-spzqr\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.791266 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/696f320e-3870-452a-ae89-f9ede235d6ce-config-data-custom\") pod \"barbican-api-7ff86b68cd-spzqr\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.791558 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/696f320e-3870-452a-ae89-f9ede235d6ce-logs\") pod \"barbican-api-7ff86b68cd-spzqr\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.791804 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-vmsvl\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") " pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.792140 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-vmsvl\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") " pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.792293 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-vmsvl\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") " pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.798281 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/696f320e-3870-452a-ae89-f9ede235d6ce-config-data\") pod \"barbican-api-7ff86b68cd-spzqr\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.798353 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696f320e-3870-452a-ae89-f9ede235d6ce-combined-ca-bundle\") pod \"barbican-api-7ff86b68cd-spzqr\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.798427 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-config\") pod \"dnsmasq-dns-59d5ff467f-vmsvl\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") " 
pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.798499 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-vmsvl\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") " pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.798526 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-586hg\" (UniqueName: \"kubernetes.io/projected/7f20dc76-f79f-4102-822b-a5e03bb18abc-kube-api-access-586hg\") pod \"dnsmasq-dns-59d5ff467f-vmsvl\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") " pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.795371 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/696f320e-3870-452a-ae89-f9ede235d6ce-config-data-custom\") pod \"barbican-api-7ff86b68cd-spzqr\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.794972 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-dns-svc\") pod \"dnsmasq-dns-59d5ff467f-vmsvl\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") " pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.792421 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/696f320e-3870-452a-ae89-f9ede235d6ce-logs\") pod \"barbican-api-7ff86b68cd-spzqr\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.794610 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-dns-swift-storage-0\") pod \"dnsmasq-dns-59d5ff467f-vmsvl\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") " pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.800048 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-config\") pod \"dnsmasq-dns-59d5ff467f-vmsvl\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") " pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.801097 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-ovsdbserver-nb\") pod \"dnsmasq-dns-59d5ff467f-vmsvl\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") " pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.804324 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-ovsdbserver-sb\") pod \"dnsmasq-dns-59d5ff467f-vmsvl\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") " pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.806028 4981 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/696f320e-3870-452a-ae89-f9ede235d6ce-config-data\") pod \"barbican-api-7ff86b68cd-spzqr\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.807366 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696f320e-3870-452a-ae89-f9ede235d6ce-combined-ca-bundle\") pod \"barbican-api-7ff86b68cd-spzqr\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.827741 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-586hg\" (UniqueName: \"kubernetes.io/projected/7f20dc76-f79f-4102-822b-a5e03bb18abc-kube-api-access-586hg\") pod \"dnsmasq-dns-59d5ff467f-vmsvl\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") " pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.834041 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dbbj\" (UniqueName: \"kubernetes.io/projected/696f320e-3870-452a-ae89-f9ede235d6ce-kube-api-access-8dbbj\") pod \"barbican-api-7ff86b68cd-spzqr\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.877129 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:38 crc kubenswrapper[4981]: I0128 15:22:38.987608 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:39 crc kubenswrapper[4981]: I0128 15:22:39.011401 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-569ff6748d-zhgp9" event={"ID":"8f290ab1-489a-4b7e-9815-a6bd2a528f5e","Type":"ContainerStarted","Data":"c3789da46b0576948421952363923a4c64396fef7a71470e6bb819afe1d41da1"} Jan 28 15:22:39 crc kubenswrapper[4981]: I0128 15:22:39.011459 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-569ff6748d-zhgp9" event={"ID":"8f290ab1-489a-4b7e-9815-a6bd2a528f5e","Type":"ContainerStarted","Data":"0b1d3b70d3c4e84b93abb132a256894c294e983a511dcfeb8ee581ab15295944"} Jan 28 15:22:39 crc kubenswrapper[4981]: I0128 15:22:39.012663 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:22:39 crc kubenswrapper[4981]: I0128 15:22:39.050304 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-569ff6748d-zhgp9" podStartSLOduration=3.050280979 podStartE2EDuration="3.050280979s" podCreationTimestamp="2026-01-28 15:22:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:39.03838826 +0000 UTC m=+1170.490546501" watchObservedRunningTime="2026-01-28 15:22:39.050280979 +0000 UTC m=+1170.502439220" Jan 28 15:22:39 crc kubenswrapper[4981]: I0128 15:22:39.233448 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7bd55659d6-qkw24"] Jan 28 15:22:39 crc kubenswrapper[4981]: I0128 15:22:39.371139 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-695f5b56f5-7h6s9"] Jan 28 15:22:39 crc kubenswrapper[4981]: I0128 15:22:39.561460 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-vmsvl"] Jan 28 15:22:39 crc kubenswrapper[4981]: I0128 15:22:39.734425 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7ff86b68cd-spzqr"] Jan 28 15:22:39 crc kubenswrapper[4981]: W0128 15:22:39.749967 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod696f320e_3870_452a_ae89_f9ede235d6ce.slice/crio-472b46319f3483b52c5e3f9d9108b5fa81d6d2b52f45863ec4bfe7494af182a2 WatchSource:0}: Error finding container 472b46319f3483b52c5e3f9d9108b5fa81d6d2b52f45863ec4bfe7494af182a2: Status 404 returned error can't find the container with id 472b46319f3483b52c5e3f9d9108b5fa81d6d2b52f45863ec4bfe7494af182a2 Jan 28 15:22:40 crc kubenswrapper[4981]: I0128 15:22:40.031176 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ff86b68cd-spzqr" event={"ID":"696f320e-3870-452a-ae89-f9ede235d6ce","Type":"ContainerStarted","Data":"472b46319f3483b52c5e3f9d9108b5fa81d6d2b52f45863ec4bfe7494af182a2"} Jan 28 15:22:40 crc kubenswrapper[4981]: I0128 15:22:40.052494 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" event={"ID":"9001e7fd-73ee-4169-a239-fa6452ac69d2","Type":"ContainerStarted","Data":"9ae52d1e7dbffe991322ddc157f8b003014f5b21cc8e80f46f2c3aaf5ba527b9"} Jan 28 15:22:40 crc kubenswrapper[4981]: I0128 15:22:40.070863 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" 
event={"ID":"7f20dc76-f79f-4102-822b-a5e03bb18abc","Type":"ContainerStarted","Data":"a20b15fa159003f0eb1c14167e208b7bee1105315355e1147bc1e70c9d7ccaa2"} Jan 28 15:22:40 crc kubenswrapper[4981]: I0128 15:22:40.070905 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" event={"ID":"7f20dc76-f79f-4102-822b-a5e03bb18abc","Type":"ContainerStarted","Data":"ee7e52aa41bee27c44958bfd14ecf3b006a85eecfd23b0e2203cbff37345d4a6"} Jan 28 15:22:40 crc kubenswrapper[4981]: I0128 15:22:40.072129 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-695f5b56f5-7h6s9" event={"ID":"2e2d1563-14e3-41bc-8830-51e28da77c5e","Type":"ContainerStarted","Data":"e39d559185a228544fe4996eff6a4e589be4e18b0df776306ce1202b687d4c57"} Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.081142 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ff86b68cd-spzqr" event={"ID":"696f320e-3870-452a-ae89-f9ede235d6ce","Type":"ContainerStarted","Data":"5d4c73e6896f4191210dc0ee4cc60e2574c4b39550e18b3fb690ba77550cc8ff"} Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.082781 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.082947 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.083057 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ff86b68cd-spzqr" event={"ID":"696f320e-3870-452a-ae89-f9ede235d6ce","Type":"ContainerStarted","Data":"fa1070c7ca2db5c93c46829a7521406195db19e4446552bab9c3c23a0d509335"} Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.084310 4981 generic.go:334] "Generic (PLEG): container finished" podID="7f20dc76-f79f-4102-822b-a5e03bb18abc" containerID="a20b15fa159003f0eb1c14167e208b7bee1105315355e1147bc1e70c9d7ccaa2" exitCode=0 Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.084458 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" event={"ID":"7f20dc76-f79f-4102-822b-a5e03bb18abc","Type":"ContainerDied","Data":"a20b15fa159003f0eb1c14167e208b7bee1105315355e1147bc1e70c9d7ccaa2"} Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.103852 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7ff86b68cd-spzqr" podStartSLOduration=3.103831403 podStartE2EDuration="3.103831403s" podCreationTimestamp="2026-01-28 15:22:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:41.103631708 +0000 UTC m=+1172.555789949" watchObservedRunningTime="2026-01-28 15:22:41.103831403 +0000 UTC m=+1172.555989644" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.243128 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-f6d8cf6db-mshgs"] Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.244772 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.252921 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f6d8cf6db-mshgs"] Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.254780 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.255102 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.356483 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-config-data-custom\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.356878 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-public-tls-certs\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.357057 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6z4v\" (UniqueName: \"kubernetes.io/projected/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-kube-api-access-h6z4v\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.357229 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-config-data\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.357376 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-internal-tls-certs\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.357509 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-logs\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.357610 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-combined-ca-bundle\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.459153 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-config-data-custom\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.459415 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-public-tls-certs\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.459522 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6z4v\" (UniqueName: \"kubernetes.io/projected/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-kube-api-access-h6z4v\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.459629 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-config-data\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.459718 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-internal-tls-certs\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.459808 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-logs\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.459891 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-combined-ca-bundle\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.464797 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-logs\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.468179 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-public-tls-certs\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.469164 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-combined-ca-bundle\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.470413 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-config-data\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.471733 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-internal-tls-certs\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.476863 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-config-data-custom\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.487356 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6z4v\" (UniqueName: \"kubernetes.io/projected/46c7dad2-c36e-4c3e-80f0-c6e3ec088723-kube-api-access-h6z4v\") pod \"barbican-api-f6d8cf6db-mshgs\" (UID: \"46c7dad2-c36e-4c3e-80f0-c6e3ec088723\") " pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.562421 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.606079 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-54f45b9c5b-drxcg" podUID="465c6840-8900-4520-b80a-aab52f45c173" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 28 15:22:41 crc kubenswrapper[4981]: I0128 15:22:41.687571 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6d9d89fcfb-mwsgh" podUID="d02db79a-7f4f-453c-8e92-2e8291f442f1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.148:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.148:8443: connect: connection refused" Jan 28 15:22:44 crc kubenswrapper[4981]: I0128 15:22:44.558564 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 15:22:44 crc kubenswrapper[4981]: I0128 15:22:44.832990 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 15:22:44 crc kubenswrapper[4981]: I0128 15:22:44.835474 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 15:22:45 crc kubenswrapper[4981]: I0128 15:22:45.111253 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 15:22:50 crc kubenswrapper[4981]: I0128 15:22:50.464800 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:50 crc kubenswrapper[4981]: I0128 15:22:50.469209 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:22:50 crc kubenswrapper[4981]: E0128 15:22:50.687059 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 28 15:22:50 crc kubenswrapper[4981]: E0128 15:22:50.687297 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9pst9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(3b1b3aa1-aaf2-4291-b735-00fc0ca3b455): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 15:22:50 crc kubenswrapper[4981]: E0128 15:22:50.688571 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="3b1b3aa1-aaf2-4291-b735-00fc0ca3b455" Jan 28 15:22:51 crc kubenswrapper[4981]: I0128 15:22:51.186872 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b1b3aa1-aaf2-4291-b735-00fc0ca3b455" containerName="ceilometer-notification-agent" 
containerID="cri-o://82d20e501ba1cc8b70db300d5117f005d6119aab41213d8f924cfdc5b62f3094" gracePeriod=30 Jan 28 15:22:51 crc kubenswrapper[4981]: I0128 15:22:51.187503 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" event={"ID":"7f20dc76-f79f-4102-822b-a5e03bb18abc","Type":"ContainerStarted","Data":"6aecf75b68f1269efd0269b8d52309e0d75c28e0a0f68d29b4e17746131be3d3"} Jan 28 15:22:51 crc kubenswrapper[4981]: I0128 15:22:51.187541 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:51 crc kubenswrapper[4981]: I0128 15:22:51.188481 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3b1b3aa1-aaf2-4291-b735-00fc0ca3b455" containerName="sg-core" containerID="cri-o://0e2c30c729c20f5fa057ed3c4f30df1fb70059bff90e303470440c08f8ca68ee" gracePeriod=30 Jan 28 15:22:51 crc kubenswrapper[4981]: I0128 15:22:51.251385 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-f6d8cf6db-mshgs"] Jan 28 15:22:51 crc kubenswrapper[4981]: I0128 15:22:51.261489 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" podStartSLOduration=13.261466698 podStartE2EDuration="13.261466698s" podCreationTimestamp="2026-01-28 15:22:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:51.240584295 +0000 UTC m=+1182.692742556" watchObservedRunningTime="2026-01-28 15:22:51.261466698 +0000 UTC m=+1182.713624939" Jan 28 15:22:51 crc kubenswrapper[4981]: I0128 15:22:51.603381 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-54f45b9c5b-drxcg" podUID="465c6840-8900-4520-b80a-aab52f45c173" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.147:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.147:8443: connect: connection refused" Jan 28 15:22:52 crc kubenswrapper[4981]: I0128 15:22:52.211026 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-695f5b56f5-7h6s9" event={"ID":"2e2d1563-14e3-41bc-8830-51e28da77c5e","Type":"ContainerStarted","Data":"3dbabba65c288568b01ea87ce73310ea149d9f185201abdf7d17bf6fe6591772"} Jan 28 15:22:52 crc kubenswrapper[4981]: I0128 15:22:52.211356 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-695f5b56f5-7h6s9" event={"ID":"2e2d1563-14e3-41bc-8830-51e28da77c5e","Type":"ContainerStarted","Data":"137f7e37f3f811a4844f6fe05c8e70b93e094687718029d257ed9267c5434a54"} Jan 28 15:22:52 crc kubenswrapper[4981]: I0128 15:22:52.220239 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-h8htg" event={"ID":"5a747315-c181-4459-ae1d-3c0c5252efb7","Type":"ContainerStarted","Data":"ef41d371bc38090af3ea5f5d5b836da62de3f1eb3c0855d616e9eebdaa5e2145"} Jan 28 15:22:52 crc kubenswrapper[4981]: I0128 15:22:52.222915 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f6d8cf6db-mshgs" event={"ID":"46c7dad2-c36e-4c3e-80f0-c6e3ec088723","Type":"ContainerStarted","Data":"60c5ff2c09d015dd52db28e515909f3e888f0ca0637ce5800013f0f4c662f104"} Jan 28 15:22:52 crc kubenswrapper[4981]: I0128 15:22:52.222940 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f6d8cf6db-mshgs" 
event={"ID":"46c7dad2-c36e-4c3e-80f0-c6e3ec088723","Type":"ContainerStarted","Data":"e0133abb25a269b96b652226bb367eb62655ce948d7b7f74618fdce575deaa22"} Jan 28 15:22:52 crc kubenswrapper[4981]: I0128 15:22:52.222952 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-f6d8cf6db-mshgs" event={"ID":"46c7dad2-c36e-4c3e-80f0-c6e3ec088723","Type":"ContainerStarted","Data":"36a5abb3368157fe58e2bc2ed0b7cc1d278a9ae75f559026afa432cbd43abfb3"} Jan 28 15:22:52 crc kubenswrapper[4981]: I0128 15:22:52.226740 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:52 crc kubenswrapper[4981]: I0128 15:22:52.230248 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:52 crc kubenswrapper[4981]: I0128 15:22:52.238690 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t6ht4" event={"ID":"7722b5f2-e226-483f-9ae3-d2b5a9e5a605","Type":"ContainerStarted","Data":"75e50301a82d3a1906792190c6d192eff716c7b2d6bd903666385be3125b6794"} Jan 28 15:22:52 crc kubenswrapper[4981]: I0128 15:22:52.247964 4981 generic.go:334] "Generic (PLEG): container finished" podID="3b1b3aa1-aaf2-4291-b735-00fc0ca3b455" containerID="0e2c30c729c20f5fa057ed3c4f30df1fb70059bff90e303470440c08f8ca68ee" exitCode=2 Jan 28 15:22:52 crc kubenswrapper[4981]: I0128 15:22:52.248048 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455","Type":"ContainerDied","Data":"0e2c30c729c20f5fa057ed3c4f30df1fb70059bff90e303470440c08f8ca68ee"} Jan 28 15:22:52 crc kubenswrapper[4981]: I0128 15:22:52.253312 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" event={"ID":"9001e7fd-73ee-4169-a239-fa6452ac69d2","Type":"ContainerStarted","Data":"28dcb01f228ca5be1d28a267a79f964effcb571b7bbb21d0cf5a221ca7432aa2"} Jan 28 15:22:52 crc kubenswrapper[4981]: I0128 15:22:52.253366 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" event={"ID":"9001e7fd-73ee-4169-a239-fa6452ac69d2","Type":"ContainerStarted","Data":"2f59f279435bc3a3f24bd3580b8fbdc482bba1c1c82f17a157afdf7a978b7d1e"} Jan 28 15:22:52 crc kubenswrapper[4981]: I0128 15:22:52.261460 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-f6d8cf6db-mshgs" podStartSLOduration=11.261443392 podStartE2EDuration="11.261443392s" podCreationTimestamp="2026-01-28 15:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:22:52.258418153 +0000 UTC m=+1183.710576394" watchObservedRunningTime="2026-01-28 15:22:52.261443392 +0000 UTC m=+1183.713601633" Jan 28 15:22:52 crc kubenswrapper[4981]: I0128 15:22:52.266555 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-695f5b56f5-7h6s9" podStartSLOduration=2.919061722 podStartE2EDuration="14.266537364s" podCreationTimestamp="2026-01-28 15:22:38 +0000 UTC" firstStartedPulling="2026-01-28 15:22:39.375088247 +0000 UTC m=+1170.827246488" lastFinishedPulling="2026-01-28 15:22:50.722563879 +0000 UTC m=+1182.174722130" observedRunningTime="2026-01-28 15:22:52.230531049 +0000 UTC m=+1183.682689290" watchObservedRunningTime="2026-01-28 15:22:52.266537364 +0000 UTC m=+1183.718695605" Jan 28 
15:22:52 crc kubenswrapper[4981]: I0128 15:22:52.286788 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-h8htg" podStartSLOduration=4.042972478 podStartE2EDuration="1m3.28677343s" podCreationTimestamp="2026-01-28 15:21:49 +0000 UTC" firstStartedPulling="2026-01-28 15:21:51.699173692 +0000 UTC m=+1123.151331933" lastFinishedPulling="2026-01-28 15:22:50.942974634 +0000 UTC m=+1182.395132885" observedRunningTime="2026-01-28 15:22:52.286528164 +0000 UTC m=+1183.738686405" watchObservedRunningTime="2026-01-28 15:22:52.28677343 +0000 UTC m=+1183.738931671" Jan 28 15:22:52 crc kubenswrapper[4981]: I0128 15:22:52.309643 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-t6ht4" podStartSLOduration=3.240244636 podStartE2EDuration="1m2.309627654s" podCreationTimestamp="2026-01-28 15:21:50 +0000 UTC" firstStartedPulling="2026-01-28 15:21:51.825522478 +0000 UTC m=+1123.277680719" lastFinishedPulling="2026-01-28 15:22:50.894905496 +0000 UTC m=+1182.347063737" observedRunningTime="2026-01-28 15:22:52.306587865 +0000 UTC m=+1183.758746106" watchObservedRunningTime="2026-01-28 15:22:52.309627654 +0000 UTC m=+1183.761785895" Jan 28 15:22:52 crc kubenswrapper[4981]: I0128 15:22:52.336093 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7bd55659d6-qkw24" podStartSLOduration=2.861474667 podStartE2EDuration="14.336077721s" podCreationTimestamp="2026-01-28 15:22:38 +0000 UTC" firstStartedPulling="2026-01-28 15:22:39.266371413 +0000 UTC m=+1170.718529664" lastFinishedPulling="2026-01-28 15:22:50.740974477 +0000 UTC m=+1182.193132718" observedRunningTime="2026-01-28 15:22:52.332798696 +0000 UTC m=+1183.784956937" watchObservedRunningTime="2026-01-28 15:22:52.336077721 +0000 UTC m=+1183.788235962" Jan 28 15:22:54 crc kubenswrapper[4981]: I0128 15:22:54.289949 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:55 crc kubenswrapper[4981]: I0128 15:22:55.289353 4981 generic.go:334] "Generic (PLEG): container finished" podID="3b1b3aa1-aaf2-4291-b735-00fc0ca3b455" containerID="82d20e501ba1cc8b70db300d5117f005d6119aab41213d8f924cfdc5b62f3094" exitCode=0 Jan 28 15:22:55 crc kubenswrapper[4981]: I0128 15:22:55.289463 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455","Type":"ContainerDied","Data":"82d20e501ba1cc8b70db300d5117f005d6119aab41213d8f924cfdc5b62f3094"} Jan 28 15:22:55 crc kubenswrapper[4981]: I0128 15:22:55.954046 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6d9d89fcfb-mwsgh" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.036513 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54f45b9c5b-drxcg"] Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.036732 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-54f45b9c5b-drxcg" podUID="465c6840-8900-4520-b80a-aab52f45c173" containerName="horizon-log" containerID="cri-o://0efb8f5d9806b6b93762b01f054f77ce9c400ef76a1e729a9449091684c8c2bc" gracePeriod=30 Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.037076 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-54f45b9c5b-drxcg" podUID="465c6840-8900-4520-b80a-aab52f45c173" containerName="horizon" 
containerID="cri-o://9b99100a343508bba0a99a5e7f2da53ebd6eef781ca2ce91d1729c68522926f0" gracePeriod=30 Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.215602 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.279107 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-run-httpd\") pod \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.279153 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-log-httpd\") pod \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.279277 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-config-data\") pod \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.279311 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-combined-ca-bundle\") pod \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.279348 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-scripts\") pod \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.279431 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pst9\" (UniqueName: \"kubernetes.io/projected/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-kube-api-access-9pst9\") pod \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.279530 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-sg-core-conf-yaml\") pod \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\" (UID: \"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455\") " Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.279637 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3b1b3aa1-aaf2-4291-b735-00fc0ca3b455" (UID: "3b1b3aa1-aaf2-4291-b735-00fc0ca3b455"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.279685 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3b1b3aa1-aaf2-4291-b735-00fc0ca3b455" (UID: "3b1b3aa1-aaf2-4291-b735-00fc0ca3b455"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.280811 4981 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.280904 4981 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.286373 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-scripts" (OuterVolumeSpecName: "scripts") pod "3b1b3aa1-aaf2-4291-b735-00fc0ca3b455" (UID: "3b1b3aa1-aaf2-4291-b735-00fc0ca3b455"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.286539 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-kube-api-access-9pst9" (OuterVolumeSpecName: "kube-api-access-9pst9") pod "3b1b3aa1-aaf2-4291-b735-00fc0ca3b455" (UID: "3b1b3aa1-aaf2-4291-b735-00fc0ca3b455"). InnerVolumeSpecName "kube-api-access-9pst9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.301716 4981 generic.go:334] "Generic (PLEG): container finished" podID="465c6840-8900-4520-b80a-aab52f45c173" containerID="9b99100a343508bba0a99a5e7f2da53ebd6eef781ca2ce91d1729c68522926f0" exitCode=0 Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.301757 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54f45b9c5b-drxcg" event={"ID":"465c6840-8900-4520-b80a-aab52f45c173","Type":"ContainerDied","Data":"9b99100a343508bba0a99a5e7f2da53ebd6eef781ca2ce91d1729c68522926f0"} Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.306384 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3b1b3aa1-aaf2-4291-b735-00fc0ca3b455","Type":"ContainerDied","Data":"b81b5988e2d2315302f8fc368a772f491894403a69f9daf490ccf6223ec9b1e0"} Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.306436 4981 scope.go:117] "RemoveContainer" containerID="0e2c30c729c20f5fa057ed3c4f30df1fb70059bff90e303470440c08f8ca68ee" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.306881 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.310286 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b1b3aa1-aaf2-4291-b735-00fc0ca3b455" (UID: "3b1b3aa1-aaf2-4291-b735-00fc0ca3b455"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.314507 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-config-data" (OuterVolumeSpecName: "config-data") pod "3b1b3aa1-aaf2-4291-b735-00fc0ca3b455" (UID: "3b1b3aa1-aaf2-4291-b735-00fc0ca3b455"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.317415 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3b1b3aa1-aaf2-4291-b735-00fc0ca3b455" (UID: "3b1b3aa1-aaf2-4291-b735-00fc0ca3b455"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.382746 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.383010 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.383164 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pst9\" (UniqueName: \"kubernetes.io/projected/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-kube-api-access-9pst9\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.383341 4981 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.383467 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.414434 4981 scope.go:117] "RemoveContainer" containerID="82d20e501ba1cc8b70db300d5117f005d6119aab41213d8f924cfdc5b62f3094" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.678788 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.688242 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.716396 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:22:56 crc kubenswrapper[4981]: E0128 15:22:56.722447 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1b3aa1-aaf2-4291-b735-00fc0ca3b455" containerName="ceilometer-notification-agent" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.722500 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1b3aa1-aaf2-4291-b735-00fc0ca3b455" containerName="ceilometer-notification-agent" Jan 28 15:22:56 crc kubenswrapper[4981]: E0128 15:22:56.722598 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b1b3aa1-aaf2-4291-b735-00fc0ca3b455" containerName="sg-core" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.722606 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1b3aa1-aaf2-4291-b735-00fc0ca3b455" containerName="sg-core" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.723557 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b1b3aa1-aaf2-4291-b735-00fc0ca3b455" containerName="ceilometer-notification-agent" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.723594 4981 
memory_manager.go:354] "RemoveStaleState removing state" podUID="3b1b3aa1-aaf2-4291-b735-00fc0ca3b455" containerName="sg-core" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.735772 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.739312 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.739482 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.769618 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.790464 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.790508 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-config-data\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.790598 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-scripts\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.790731 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpqnx\" (UniqueName: \"kubernetes.io/projected/def15511-2069-4c0c-88ac-b22ce637e066-kube-api-access-rpqnx\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.790752 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/def15511-2069-4c0c-88ac-b22ce637e066-run-httpd\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.790839 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/def15511-2069-4c0c-88ac-b22ce637e066-log-httpd\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.790858 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.892463 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/def15511-2069-4c0c-88ac-b22ce637e066-log-httpd\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.892502 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.892530 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.892546 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-config-data\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.892571 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-scripts\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.892635 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpqnx\" (UniqueName: \"kubernetes.io/projected/def15511-2069-4c0c-88ac-b22ce637e066-kube-api-access-rpqnx\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.892654 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/def15511-2069-4c0c-88ac-b22ce637e066-run-httpd\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.893107 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/def15511-2069-4c0c-88ac-b22ce637e066-run-httpd\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.893333 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/def15511-2069-4c0c-88ac-b22ce637e066-log-httpd\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.896784 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.897338 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.904721 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-scripts\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.912174 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-config-data\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:56 crc kubenswrapper[4981]: I0128 15:22:56.923943 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpqnx\" (UniqueName: \"kubernetes.io/projected/def15511-2069-4c0c-88ac-b22ce637e066-kube-api-access-rpqnx\") pod \"ceilometer-0\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " pod="openstack/ceilometer-0" Jan 28 15:22:57 crc kubenswrapper[4981]: I0128 15:22:57.071492 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:22:57 crc kubenswrapper[4981]: I0128 15:22:57.360961 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b1b3aa1-aaf2-4291-b735-00fc0ca3b455" path="/var/lib/kubelet/pods/3b1b3aa1-aaf2-4291-b735-00fc0ca3b455/volumes" Jan 28 15:22:57 crc kubenswrapper[4981]: W0128 15:22:57.620329 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddef15511_2069_4c0c_88ac_b22ce637e066.slice/crio-c89f10ee4cf7bd048f31816f5d2ad379edf20f2b9ff19a48edbd8322d8dfc696 WatchSource:0}: Error finding container c89f10ee4cf7bd048f31816f5d2ad379edf20f2b9ff19a48edbd8322d8dfc696: Status 404 returned error can't find the container with id c89f10ee4cf7bd048f31816f5d2ad379edf20f2b9ff19a48edbd8322d8dfc696 Jan 28 15:22:57 crc kubenswrapper[4981]: I0128 15:22:57.622808 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:22:58 crc kubenswrapper[4981]: I0128 15:22:58.330607 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"def15511-2069-4c0c-88ac-b22ce637e066","Type":"ContainerStarted","Data":"c89f10ee4cf7bd048f31816f5d2ad379edf20f2b9ff19a48edbd8322d8dfc696"} Jan 28 15:22:58 crc kubenswrapper[4981]: I0128 15:22:58.515522 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:58 crc kubenswrapper[4981]: I0128 15:22:58.534119 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-f6d8cf6db-mshgs" Jan 28 15:22:58 crc kubenswrapper[4981]: I0128 15:22:58.646444 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7ff86b68cd-spzqr"] Jan 28 15:22:58 crc kubenswrapper[4981]: I0128 15:22:58.646730 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7ff86b68cd-spzqr" podUID="696f320e-3870-452a-ae89-f9ede235d6ce" containerName="barbican-api-log" containerID="cri-o://fa1070c7ca2db5c93c46829a7521406195db19e4446552bab9c3c23a0d509335" gracePeriod=30 Jan 28 15:22:58 crc 
kubenswrapper[4981]: I0128 15:22:58.648131 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7ff86b68cd-spzqr" podUID="696f320e-3870-452a-ae89-f9ede235d6ce" containerName="barbican-api" containerID="cri-o://5d4c73e6896f4191210dc0ee4cc60e2574c4b39550e18b3fb690ba77550cc8ff" gracePeriod=30 Jan 28 15:22:58 crc kubenswrapper[4981]: I0128 15:22:58.879380 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" Jan 28 15:22:58 crc kubenswrapper[4981]: I0128 15:22:58.944574 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7q5wl"] Jan 28 15:22:58 crc kubenswrapper[4981]: I0128 15:22:58.944825 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" podUID="fa17a405-daf2-4bec-ab6d-8337e759165a" containerName="dnsmasq-dns" containerID="cri-o://db3e802e110f6f8ccadb679ee82b0cb801c753bff29cb8232f82cd84175ed503" gracePeriod=10 Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.346842 4981 generic.go:334] "Generic (PLEG): container finished" podID="696f320e-3870-452a-ae89-f9ede235d6ce" containerID="fa1070c7ca2db5c93c46829a7521406195db19e4446552bab9c3c23a0d509335" exitCode=143 Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.347014 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ff86b68cd-spzqr" event={"ID":"696f320e-3870-452a-ae89-f9ede235d6ce","Type":"ContainerDied","Data":"fa1070c7ca2db5c93c46829a7521406195db19e4446552bab9c3c23a0d509335"} Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.351113 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"def15511-2069-4c0c-88ac-b22ce637e066","Type":"ContainerStarted","Data":"8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f"} Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.362530 4981 generic.go:334] "Generic (PLEG): container finished" podID="fa17a405-daf2-4bec-ab6d-8337e759165a" containerID="db3e802e110f6f8ccadb679ee82b0cb801c753bff29cb8232f82cd84175ed503" exitCode=0 Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.364288 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" event={"ID":"fa17a405-daf2-4bec-ab6d-8337e759165a","Type":"ContainerDied","Data":"db3e802e110f6f8ccadb679ee82b0cb801c753bff29cb8232f82cd84175ed503"} Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.445377 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.597899 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-ovsdbserver-nb\") pod \"fa17a405-daf2-4bec-ab6d-8337e759165a\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.598005 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-config\") pod \"fa17a405-daf2-4bec-ab6d-8337e759165a\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.598046 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltltr\" (UniqueName: \"kubernetes.io/projected/fa17a405-daf2-4bec-ab6d-8337e759165a-kube-api-access-ltltr\") pod \"fa17a405-daf2-4bec-ab6d-8337e759165a\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.598074 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-dns-swift-storage-0\") pod \"fa17a405-daf2-4bec-ab6d-8337e759165a\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.598103 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-dns-svc\") pod \"fa17a405-daf2-4bec-ab6d-8337e759165a\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.598232 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-ovsdbserver-sb\") pod \"fa17a405-daf2-4bec-ab6d-8337e759165a\" (UID: \"fa17a405-daf2-4bec-ab6d-8337e759165a\") " Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.606370 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa17a405-daf2-4bec-ab6d-8337e759165a-kube-api-access-ltltr" (OuterVolumeSpecName: "kube-api-access-ltltr") pod "fa17a405-daf2-4bec-ab6d-8337e759165a" (UID: "fa17a405-daf2-4bec-ab6d-8337e759165a"). InnerVolumeSpecName "kube-api-access-ltltr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.642684 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fa17a405-daf2-4bec-ab6d-8337e759165a" (UID: "fa17a405-daf2-4bec-ab6d-8337e759165a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.648215 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fa17a405-daf2-4bec-ab6d-8337e759165a" (UID: "fa17a405-daf2-4bec-ab6d-8337e759165a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.648864 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fa17a405-daf2-4bec-ab6d-8337e759165a" (UID: "fa17a405-daf2-4bec-ab6d-8337e759165a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.653119 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fa17a405-daf2-4bec-ab6d-8337e759165a" (UID: "fa17a405-daf2-4bec-ab6d-8337e759165a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.657589 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-config" (OuterVolumeSpecName: "config") pod "fa17a405-daf2-4bec-ab6d-8337e759165a" (UID: "fa17a405-daf2-4bec-ab6d-8337e759165a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.700595 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.700627 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltltr\" (UniqueName: \"kubernetes.io/projected/fa17a405-daf2-4bec-ab6d-8337e759165a-kube-api-access-ltltr\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.700638 4981 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.700646 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.700654 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 15:22:59 crc kubenswrapper[4981]: I0128 15:22:59.700662 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa17a405-daf2-4bec-ab6d-8337e759165a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:00 crc kubenswrapper[4981]: I0128 15:23:00.374365 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" event={"ID":"fa17a405-daf2-4bec-ab6d-8337e759165a","Type":"ContainerDied","Data":"d8db9a110e776618d6df8eba6566ce3c3943664a763b5c49db457f8e7ccd500f"} Jan 28 15:23:00 crc kubenswrapper[4981]: I0128 15:23:00.374693 4981 scope.go:117] "RemoveContainer" containerID="db3e802e110f6f8ccadb679ee82b0cb801c753bff29cb8232f82cd84175ed503" Jan 28 15:23:00 crc kubenswrapper[4981]: I0128 15:23:00.374426 4981 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b5c85b87-7q5wl" Jan 28 15:23:00 crc kubenswrapper[4981]: I0128 15:23:00.387988 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"def15511-2069-4c0c-88ac-b22ce637e066","Type":"ContainerStarted","Data":"c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec"} Jan 28 15:23:00 crc kubenswrapper[4981]: I0128 15:23:00.422724 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7q5wl"] Jan 28 15:23:00 crc kubenswrapper[4981]: I0128 15:23:00.432482 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b5c85b87-7q5wl"] Jan 28 15:23:00 crc kubenswrapper[4981]: I0128 15:23:00.541773 4981 scope.go:117] "RemoveContainer" containerID="590ff380f6f9505e3ce74379ea190125f952fef6562a97e1c051b441e3ce9cc6" Jan 28 15:23:01 crc kubenswrapper[4981]: I0128 15:23:01.347865 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa17a405-daf2-4bec-ab6d-8337e759165a" path="/var/lib/kubelet/pods/fa17a405-daf2-4bec-ab6d-8337e759165a/volumes" Jan 28 15:23:01 crc kubenswrapper[4981]: I0128 15:23:01.812746 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7ff86b68cd-spzqr" podUID="696f320e-3870-452a-ae89-f9ede235d6ce" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:57228->10.217.0.156:9311: read: connection reset by peer" Jan 28 15:23:01 crc kubenswrapper[4981]: I0128 15:23:01.812820 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7ff86b68cd-spzqr" podUID="696f320e-3870-452a-ae89-f9ede235d6ce" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:57216->10.217.0.156:9311: read: connection reset by peer" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.332431 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.407355 4981 generic.go:334] "Generic (PLEG): container finished" podID="696f320e-3870-452a-ae89-f9ede235d6ce" containerID="5d4c73e6896f4191210dc0ee4cc60e2574c4b39550e18b3fb690ba77550cc8ff" exitCode=0 Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.407406 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ff86b68cd-spzqr" event={"ID":"696f320e-3870-452a-ae89-f9ede235d6ce","Type":"ContainerDied","Data":"5d4c73e6896f4191210dc0ee4cc60e2574c4b39550e18b3fb690ba77550cc8ff"} Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.407470 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7ff86b68cd-spzqr" event={"ID":"696f320e-3870-452a-ae89-f9ede235d6ce","Type":"ContainerDied","Data":"472b46319f3483b52c5e3f9d9108b5fa81d6d2b52f45863ec4bfe7494af182a2"} Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.407490 4981 scope.go:117] "RemoveContainer" containerID="5d4c73e6896f4191210dc0ee4cc60e2574c4b39550e18b3fb690ba77550cc8ff" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.407423 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7ff86b68cd-spzqr" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.410782 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"def15511-2069-4c0c-88ac-b22ce637e066","Type":"ContainerStarted","Data":"23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758"} Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.427835 4981 scope.go:117] "RemoveContainer" containerID="fa1070c7ca2db5c93c46829a7521406195db19e4446552bab9c3c23a0d509335" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.453386 4981 scope.go:117] "RemoveContainer" containerID="5d4c73e6896f4191210dc0ee4cc60e2574c4b39550e18b3fb690ba77550cc8ff" Jan 28 15:23:02 crc kubenswrapper[4981]: E0128 15:23:02.456521 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d4c73e6896f4191210dc0ee4cc60e2574c4b39550e18b3fb690ba77550cc8ff\": container with ID starting with 5d4c73e6896f4191210dc0ee4cc60e2574c4b39550e18b3fb690ba77550cc8ff not found: ID does not exist" containerID="5d4c73e6896f4191210dc0ee4cc60e2574c4b39550e18b3fb690ba77550cc8ff" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.456571 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d4c73e6896f4191210dc0ee4cc60e2574c4b39550e18b3fb690ba77550cc8ff"} err="failed to get container status \"5d4c73e6896f4191210dc0ee4cc60e2574c4b39550e18b3fb690ba77550cc8ff\": rpc error: code = NotFound desc = could not find container \"5d4c73e6896f4191210dc0ee4cc60e2574c4b39550e18b3fb690ba77550cc8ff\": container with ID starting with 5d4c73e6896f4191210dc0ee4cc60e2574c4b39550e18b3fb690ba77550cc8ff not found: ID does not exist" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.456600 4981 scope.go:117] "RemoveContainer" containerID="fa1070c7ca2db5c93c46829a7521406195db19e4446552bab9c3c23a0d509335" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.456753 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696f320e-3870-452a-ae89-f9ede235d6ce-combined-ca-bundle\") pod \"696f320e-3870-452a-ae89-f9ede235d6ce\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.456845 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dbbj\" (UniqueName: \"kubernetes.io/projected/696f320e-3870-452a-ae89-f9ede235d6ce-kube-api-access-8dbbj\") pod \"696f320e-3870-452a-ae89-f9ede235d6ce\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.456904 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/696f320e-3870-452a-ae89-f9ede235d6ce-config-data\") pod \"696f320e-3870-452a-ae89-f9ede235d6ce\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.456935 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/696f320e-3870-452a-ae89-f9ede235d6ce-logs\") pod \"696f320e-3870-452a-ae89-f9ede235d6ce\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.457075 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/696f320e-3870-452a-ae89-f9ede235d6ce-config-data-custom\") pod \"696f320e-3870-452a-ae89-f9ede235d6ce\" (UID: \"696f320e-3870-452a-ae89-f9ede235d6ce\") " Jan 28 15:23:02 crc kubenswrapper[4981]: E0128 15:23:02.457667 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa1070c7ca2db5c93c46829a7521406195db19e4446552bab9c3c23a0d509335\": container with ID starting with fa1070c7ca2db5c93c46829a7521406195db19e4446552bab9c3c23a0d509335 not found: ID does not exist" containerID="fa1070c7ca2db5c93c46829a7521406195db19e4446552bab9c3c23a0d509335" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.457779 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa1070c7ca2db5c93c46829a7521406195db19e4446552bab9c3c23a0d509335"} err="failed to get container status \"fa1070c7ca2db5c93c46829a7521406195db19e4446552bab9c3c23a0d509335\": rpc error: code = NotFound desc = could not find container \"fa1070c7ca2db5c93c46829a7521406195db19e4446552bab9c3c23a0d509335\": container with ID starting with fa1070c7ca2db5c93c46829a7521406195db19e4446552bab9c3c23a0d509335 not found: ID does not exist" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.462508 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/696f320e-3870-452a-ae89-f9ede235d6ce-logs" (OuterVolumeSpecName: "logs") pod "696f320e-3870-452a-ae89-f9ede235d6ce" (UID: "696f320e-3870-452a-ae89-f9ede235d6ce"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.462750 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/696f320e-3870-452a-ae89-f9ede235d6ce-kube-api-access-8dbbj" (OuterVolumeSpecName: "kube-api-access-8dbbj") pod "696f320e-3870-452a-ae89-f9ede235d6ce" (UID: "696f320e-3870-452a-ae89-f9ede235d6ce"). InnerVolumeSpecName "kube-api-access-8dbbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.463016 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696f320e-3870-452a-ae89-f9ede235d6ce-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "696f320e-3870-452a-ae89-f9ede235d6ce" (UID: "696f320e-3870-452a-ae89-f9ede235d6ce"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.482273 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696f320e-3870-452a-ae89-f9ede235d6ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "696f320e-3870-452a-ae89-f9ede235d6ce" (UID: "696f320e-3870-452a-ae89-f9ede235d6ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.501942 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/696f320e-3870-452a-ae89-f9ede235d6ce-config-data" (OuterVolumeSpecName: "config-data") pod "696f320e-3870-452a-ae89-f9ede235d6ce" (UID: "696f320e-3870-452a-ae89-f9ede235d6ce"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.558451 4981 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/696f320e-3870-452a-ae89-f9ede235d6ce-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.558478 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/696f320e-3870-452a-ae89-f9ede235d6ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.558488 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dbbj\" (UniqueName: \"kubernetes.io/projected/696f320e-3870-452a-ae89-f9ede235d6ce-kube-api-access-8dbbj\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.558497 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/696f320e-3870-452a-ae89-f9ede235d6ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.558504 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/696f320e-3870-452a-ae89-f9ede235d6ce-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.738872 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7ff86b68cd-spzqr"] Jan 28 15:23:02 crc kubenswrapper[4981]: I0128 15:23:02.746059 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7ff86b68cd-spzqr"] Jan 28 15:23:03 crc kubenswrapper[4981]: I0128 15:23:03.339555 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="696f320e-3870-452a-ae89-f9ede235d6ce" path="/var/lib/kubelet/pods/696f320e-3870-452a-ae89-f9ede235d6ce/volumes" Jan 28 15:23:04 crc kubenswrapper[4981]: I0128 15:23:04.445421 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"def15511-2069-4c0c-88ac-b22ce637e066","Type":"ContainerStarted","Data":"599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e"} Jan 28 15:23:04 crc kubenswrapper[4981]: I0128 15:23:04.447154 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 15:23:04 crc kubenswrapper[4981]: I0128 15:23:04.485868 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.055842578 podStartE2EDuration="8.485831643s" podCreationTimestamp="2026-01-28 15:22:56 +0000 UTC" firstStartedPulling="2026-01-28 15:22:57.623629311 +0000 UTC m=+1189.075787552" lastFinishedPulling="2026-01-28 15:23:04.053618336 +0000 UTC m=+1195.505776617" observedRunningTime="2026-01-28 15:23:04.477157578 +0000 UTC m=+1195.929315849" watchObservedRunningTime="2026-01-28 15:23:04.485831643 +0000 UTC m=+1195.937989924" Jan 28 15:23:08 crc kubenswrapper[4981]: I0128 15:23:08.888335 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-569ff6748d-zhgp9" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.093958 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 28 15:23:10 crc kubenswrapper[4981]: E0128 15:23:10.094391 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa17a405-daf2-4bec-ab6d-8337e759165a" 
containerName="dnsmasq-dns" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.094408 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa17a405-daf2-4bec-ab6d-8337e759165a" containerName="dnsmasq-dns" Jan 28 15:23:10 crc kubenswrapper[4981]: E0128 15:23:10.094422 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696f320e-3870-452a-ae89-f9ede235d6ce" containerName="barbican-api" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.094431 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="696f320e-3870-452a-ae89-f9ede235d6ce" containerName="barbican-api" Jan 28 15:23:10 crc kubenswrapper[4981]: E0128 15:23:10.094445 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa17a405-daf2-4bec-ab6d-8337e759165a" containerName="init" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.094455 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa17a405-daf2-4bec-ab6d-8337e759165a" containerName="init" Jan 28 15:23:10 crc kubenswrapper[4981]: E0128 15:23:10.094479 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="696f320e-3870-452a-ae89-f9ede235d6ce" containerName="barbican-api-log" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.094487 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="696f320e-3870-452a-ae89-f9ede235d6ce" containerName="barbican-api-log" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.094702 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa17a405-daf2-4bec-ab6d-8337e759165a" containerName="dnsmasq-dns" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.094736 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="696f320e-3870-452a-ae89-f9ede235d6ce" containerName="barbican-api" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.094766 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="696f320e-3870-452a-ae89-f9ede235d6ce" containerName="barbican-api-log" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.095627 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.101906 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.102143 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.103668 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-89rrj" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.111972 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.210149 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6-openstack-config\") pod \"openstackclient\" (UID: \"5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6\") " pod="openstack/openstackclient" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.210267 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8hqs\" (UniqueName: \"kubernetes.io/projected/5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6-kube-api-access-c8hqs\") pod \"openstackclient\" (UID: \"5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6\") " pod="openstack/openstackclient" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.210302 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6-openstack-config-secret\") pod \"openstackclient\" (UID: \"5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6\") " pod="openstack/openstackclient" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.210373 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6\") " pod="openstack/openstackclient" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.311654 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8hqs\" (UniqueName: \"kubernetes.io/projected/5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6-kube-api-access-c8hqs\") pod \"openstackclient\" (UID: \"5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6\") " pod="openstack/openstackclient" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.311701 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6-openstack-config-secret\") pod \"openstackclient\" (UID: \"5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6\") " pod="openstack/openstackclient" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.311788 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6\") " pod="openstack/openstackclient" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.312567 4981 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6-openstack-config\") pod \"openstackclient\" (UID: \"5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6\") " pod="openstack/openstackclient" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.313169 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6-openstack-config\") pod \"openstackclient\" (UID: \"5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6\") " pod="openstack/openstackclient" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.323163 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6-openstack-config-secret\") pod \"openstackclient\" (UID: \"5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6\") " pod="openstack/openstackclient" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.324309 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6\") " pod="openstack/openstackclient" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.328425 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8hqs\" (UniqueName: \"kubernetes.io/projected/5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6-kube-api-access-c8hqs\") pod \"openstackclient\" (UID: \"5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6\") " pod="openstack/openstackclient" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.418456 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 15:23:10 crc kubenswrapper[4981]: I0128 15:23:10.897922 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 15:23:11 crc kubenswrapper[4981]: I0128 15:23:11.523721 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6","Type":"ContainerStarted","Data":"31522099f1af31d7bbf7c79d75b0d1a1d59cc037a7070c2ebfacd3a9fdcc94ee"} Jan 28 15:23:16 crc kubenswrapper[4981]: I0128 15:23:16.581015 4981 generic.go:334] "Generic (PLEG): container finished" podID="7722b5f2-e226-483f-9ae3-d2b5a9e5a605" containerID="75e50301a82d3a1906792190c6d192eff716c7b2d6bd903666385be3125b6794" exitCode=0 Jan 28 15:23:16 crc kubenswrapper[4981]: I0128 15:23:16.581098 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t6ht4" event={"ID":"7722b5f2-e226-483f-9ae3-d2b5a9e5a605","Type":"ContainerDied","Data":"75e50301a82d3a1906792190c6d192eff716c7b2d6bd903666385be3125b6794"} Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.459564 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-d79b67667-4jvlp"] Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.461444 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.464308 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.465286 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.465820 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.483363 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-d79b67667-4jvlp"] Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.580700 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-config-data\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.580818 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-etc-swift\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.580888 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-public-tls-certs\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.580965 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-combined-ca-bundle\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.580995 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt2v5\" (UniqueName: \"kubernetes.io/projected/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-kube-api-access-gt2v5\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.581016 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-internal-tls-certs\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.581121 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-log-httpd\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " 
pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.581182 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-run-httpd\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.682880 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-run-httpd\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.683172 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-config-data\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.683254 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-etc-swift\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.683331 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-public-tls-certs\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.683379 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-combined-ca-bundle\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.683417 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt2v5\" (UniqueName: \"kubernetes.io/projected/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-kube-api-access-gt2v5\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.683456 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-internal-tls-certs\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.683377 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-run-httpd\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: 
I0128 15:23:18.683526 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-log-httpd\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.685585 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-log-httpd\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.690321 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-config-data\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.691064 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-public-tls-certs\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.692792 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-internal-tls-certs\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.693065 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-etc-swift\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.693496 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-combined-ca-bundle\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.703671 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt2v5\" (UniqueName: \"kubernetes.io/projected/f3854c5d-2ac4-48d0-96df-a96b2fa5feb7-kube-api-access-gt2v5\") pod \"swift-proxy-d79b67667-4jvlp\" (UID: \"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7\") " pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:18 crc kubenswrapper[4981]: I0128 15:23:18.783689 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:19 crc kubenswrapper[4981]: I0128 15:23:19.082775 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:23:19 crc kubenswrapper[4981]: I0128 15:23:19.083071 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="def15511-2069-4c0c-88ac-b22ce637e066" containerName="ceilometer-central-agent" containerID="cri-o://8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f" gracePeriod=30 Jan 28 15:23:19 crc kubenswrapper[4981]: I0128 15:23:19.083136 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="def15511-2069-4c0c-88ac-b22ce637e066" containerName="sg-core" containerID="cri-o://23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758" gracePeriod=30 Jan 28 15:23:19 crc kubenswrapper[4981]: I0128 15:23:19.083269 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="def15511-2069-4c0c-88ac-b22ce637e066" containerName="proxy-httpd" containerID="cri-o://599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e" gracePeriod=30 Jan 28 15:23:19 crc kubenswrapper[4981]: I0128 15:23:19.083251 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="def15511-2069-4c0c-88ac-b22ce637e066" containerName="ceilometer-notification-agent" containerID="cri-o://c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec" gracePeriod=30 Jan 28 15:23:19 crc kubenswrapper[4981]: I0128 15:23:19.091638 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="def15511-2069-4c0c-88ac-b22ce637e066" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.158:3000/\": EOF" Jan 28 15:23:19 crc kubenswrapper[4981]: I0128 15:23:19.611675 4981 generic.go:334] "Generic (PLEG): container finished" podID="def15511-2069-4c0c-88ac-b22ce637e066" containerID="23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758" exitCode=2 Jan 28 15:23:19 crc kubenswrapper[4981]: I0128 15:23:19.611784 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"def15511-2069-4c0c-88ac-b22ce637e066","Type":"ContainerDied","Data":"23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758"} Jan 28 15:23:19 crc kubenswrapper[4981]: I0128 15:23:19.897591 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:23:19 crc kubenswrapper[4981]: I0128 15:23:19.897648 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.189701 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-t6ht4" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.235172 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zs4hx\" (UniqueName: \"kubernetes.io/projected/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-kube-api-access-zs4hx\") pod \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.235297 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-scripts\") pod \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.235484 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-config-data\") pod \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.235528 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-combined-ca-bundle\") pod \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.235698 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-logs\") pod \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\" (UID: \"7722b5f2-e226-483f-9ae3-d2b5a9e5a605\") " Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.237121 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-logs" (OuterVolumeSpecName: "logs") pod "7722b5f2-e226-483f-9ae3-d2b5a9e5a605" (UID: "7722b5f2-e226-483f-9ae3-d2b5a9e5a605"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.243556 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-scripts" (OuterVolumeSpecName: "scripts") pod "7722b5f2-e226-483f-9ae3-d2b5a9e5a605" (UID: "7722b5f2-e226-483f-9ae3-d2b5a9e5a605"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.257739 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-kube-api-access-zs4hx" (OuterVolumeSpecName: "kube-api-access-zs4hx") pod "7722b5f2-e226-483f-9ae3-d2b5a9e5a605" (UID: "7722b5f2-e226-483f-9ae3-d2b5a9e5a605"). InnerVolumeSpecName "kube-api-access-zs4hx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.313399 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-config-data" (OuterVolumeSpecName: "config-data") pod "7722b5f2-e226-483f-9ae3-d2b5a9e5a605" (UID: "7722b5f2-e226-483f-9ae3-d2b5a9e5a605"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.323712 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7722b5f2-e226-483f-9ae3-d2b5a9e5a605" (UID: "7722b5f2-e226-483f-9ae3-d2b5a9e5a605"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.341820 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.341849 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.341861 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zs4hx\" (UniqueName: \"kubernetes.io/projected/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-kube-api-access-zs4hx\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.341875 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.341885 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7722b5f2-e226-483f-9ae3-d2b5a9e5a605-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.401617 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.442667 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-combined-ca-bundle\") pod \"def15511-2069-4c0c-88ac-b22ce637e066\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.442764 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-config-data\") pod \"def15511-2069-4c0c-88ac-b22ce637e066\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.442827 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/def15511-2069-4c0c-88ac-b22ce637e066-run-httpd\") pod \"def15511-2069-4c0c-88ac-b22ce637e066\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.442921 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/def15511-2069-4c0c-88ac-b22ce637e066-log-httpd\") pod \"def15511-2069-4c0c-88ac-b22ce637e066\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.443011 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-sg-core-conf-yaml\") pod \"def15511-2069-4c0c-88ac-b22ce637e066\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.443090 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpqnx\" (UniqueName: \"kubernetes.io/projected/def15511-2069-4c0c-88ac-b22ce637e066-kube-api-access-rpqnx\") pod \"def15511-2069-4c0c-88ac-b22ce637e066\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.443218 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-scripts\") pod \"def15511-2069-4c0c-88ac-b22ce637e066\" (UID: \"def15511-2069-4c0c-88ac-b22ce637e066\") " Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.443291 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/def15511-2069-4c0c-88ac-b22ce637e066-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "def15511-2069-4c0c-88ac-b22ce637e066" (UID: "def15511-2069-4c0c-88ac-b22ce637e066"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.443703 4981 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/def15511-2069-4c0c-88ac-b22ce637e066-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.443844 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/def15511-2069-4c0c-88ac-b22ce637e066-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "def15511-2069-4c0c-88ac-b22ce637e066" (UID: "def15511-2069-4c0c-88ac-b22ce637e066"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.446359 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/def15511-2069-4c0c-88ac-b22ce637e066-kube-api-access-rpqnx" (OuterVolumeSpecName: "kube-api-access-rpqnx") pod "def15511-2069-4c0c-88ac-b22ce637e066" (UID: "def15511-2069-4c0c-88ac-b22ce637e066"). InnerVolumeSpecName "kube-api-access-rpqnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.448345 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-scripts" (OuterVolumeSpecName: "scripts") pod "def15511-2069-4c0c-88ac-b22ce637e066" (UID: "def15511-2069-4c0c-88ac-b22ce637e066"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.477817 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "def15511-2069-4c0c-88ac-b22ce637e066" (UID: "def15511-2069-4c0c-88ac-b22ce637e066"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.539100 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "def15511-2069-4c0c-88ac-b22ce637e066" (UID: "def15511-2069-4c0c-88ac-b22ce637e066"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.544595 4981 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/def15511-2069-4c0c-88ac-b22ce637e066-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.544623 4981 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.544632 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpqnx\" (UniqueName: \"kubernetes.io/projected/def15511-2069-4c0c-88ac-b22ce637e066-kube-api-access-rpqnx\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.544642 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.544654 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.565875 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-config-data" (OuterVolumeSpecName: "config-data") pod "def15511-2069-4c0c-88ac-b22ce637e066" (UID: "def15511-2069-4c0c-88ac-b22ce637e066"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.629401 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-t6ht4" event={"ID":"7722b5f2-e226-483f-9ae3-d2b5a9e5a605","Type":"ContainerDied","Data":"b86c3dd34b8aed9dc2c11ff2dc07d11e1c58dd47c9bd07ae529a17a2474d316f"} Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.629464 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-t6ht4" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.629487 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b86c3dd34b8aed9dc2c11ff2dc07d11e1c58dd47c9bd07ae529a17a2474d316f" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.631771 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6","Type":"ContainerStarted","Data":"45ab7bd8df32be6a050db1c4654be81b37c763be76825aaa71e1b45a396079b8"} Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.635471 4981 generic.go:334] "Generic (PLEG): container finished" podID="def15511-2069-4c0c-88ac-b22ce637e066" containerID="599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e" exitCode=0 Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.635501 4981 generic.go:334] "Generic (PLEG): container finished" podID="def15511-2069-4c0c-88ac-b22ce637e066" containerID="c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec" exitCode=0 Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.635513 4981 generic.go:334] "Generic (PLEG): container finished" podID="def15511-2069-4c0c-88ac-b22ce637e066" containerID="8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f" exitCode=0 Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.635529 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"def15511-2069-4c0c-88ac-b22ce637e066","Type":"ContainerDied","Data":"599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e"} Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.635554 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"def15511-2069-4c0c-88ac-b22ce637e066","Type":"ContainerDied","Data":"c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec"} Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.635565 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"def15511-2069-4c0c-88ac-b22ce637e066","Type":"ContainerDied","Data":"8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f"} Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.635574 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"def15511-2069-4c0c-88ac-b22ce637e066","Type":"ContainerDied","Data":"c89f10ee4cf7bd048f31816f5d2ad379edf20f2b9ff19a48edbd8322d8dfc696"} Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.635589 4981 scope.go:117] "RemoveContainer" containerID="599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.640461 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.647516 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/def15511-2069-4c0c-88ac-b22ce637e066-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.649213 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.191391036 podStartE2EDuration="11.649195288s" podCreationTimestamp="2026-01-28 15:23:10 +0000 UTC" firstStartedPulling="2026-01-28 15:23:10.906721162 +0000 UTC m=+1202.358879413" lastFinishedPulling="2026-01-28 15:23:21.364525424 +0000 UTC m=+1212.816683665" observedRunningTime="2026-01-28 15:23:21.64735805 +0000 UTC m=+1213.099516291" watchObservedRunningTime="2026-01-28 15:23:21.649195288 +0000 UTC m=+1213.101353529" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.678024 4981 scope.go:117] "RemoveContainer" containerID="23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.710038 4981 scope.go:117] "RemoveContainer" containerID="c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.720420 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.732148 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.740071 4981 scope.go:117] "RemoveContainer" containerID="8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.741836 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:23:21 crc kubenswrapper[4981]: E0128 15:23:21.742276 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="def15511-2069-4c0c-88ac-b22ce637e066" containerName="sg-core" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.742294 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="def15511-2069-4c0c-88ac-b22ce637e066" containerName="sg-core" Jan 28 15:23:21 crc kubenswrapper[4981]: E0128 15:23:21.742308 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="def15511-2069-4c0c-88ac-b22ce637e066" containerName="ceilometer-central-agent" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.742314 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="def15511-2069-4c0c-88ac-b22ce637e066" containerName="ceilometer-central-agent" Jan 28 15:23:21 crc kubenswrapper[4981]: E0128 15:23:21.742322 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="def15511-2069-4c0c-88ac-b22ce637e066" containerName="proxy-httpd" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.742328 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="def15511-2069-4c0c-88ac-b22ce637e066" containerName="proxy-httpd" Jan 28 15:23:21 crc kubenswrapper[4981]: E0128 15:23:21.742348 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="def15511-2069-4c0c-88ac-b22ce637e066" containerName="ceilometer-notification-agent" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.742353 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="def15511-2069-4c0c-88ac-b22ce637e066" containerName="ceilometer-notification-agent" Jan 28 15:23:21 crc kubenswrapper[4981]: 
E0128 15:23:21.742367 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7722b5f2-e226-483f-9ae3-d2b5a9e5a605" containerName="placement-db-sync" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.742375 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="7722b5f2-e226-483f-9ae3-d2b5a9e5a605" containerName="placement-db-sync" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.742532 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="def15511-2069-4c0c-88ac-b22ce637e066" containerName="sg-core" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.742546 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="def15511-2069-4c0c-88ac-b22ce637e066" containerName="ceilometer-notification-agent" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.742562 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="def15511-2069-4c0c-88ac-b22ce637e066" containerName="ceilometer-central-agent" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.742571 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="def15511-2069-4c0c-88ac-b22ce637e066" containerName="proxy-httpd" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.742588 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="7722b5f2-e226-483f-9ae3-d2b5a9e5a605" containerName="placement-db-sync" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.744593 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.746725 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.749157 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.760097 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.763094 4981 scope.go:117] "RemoveContainer" containerID="599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e" Jan 28 15:23:21 crc kubenswrapper[4981]: E0128 15:23:21.765961 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e\": container with ID starting with 599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e not found: ID does not exist" containerID="599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.766072 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e"} err="failed to get container status \"599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e\": rpc error: code = NotFound desc = could not find container \"599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e\": container with ID starting with 599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e not found: ID does not exist" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.766143 4981 scope.go:117] "RemoveContainer" containerID="23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758" Jan 28 15:23:21 crc kubenswrapper[4981]: E0128 15:23:21.768643 4981 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758\": container with ID starting with 23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758 not found: ID does not exist" containerID="23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.768773 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758"} err="failed to get container status \"23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758\": rpc error: code = NotFound desc = could not find container \"23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758\": container with ID starting with 23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758 not found: ID does not exist" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.768856 4981 scope.go:117] "RemoveContainer" containerID="c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec" Jan 28 15:23:21 crc kubenswrapper[4981]: E0128 15:23:21.769148 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec\": container with ID starting with c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec not found: ID does not exist" containerID="c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.769280 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec"} err="failed to get container status \"c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec\": rpc error: code = NotFound desc = could not find container \"c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec\": container with ID starting with c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec not found: ID does not exist" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.769485 4981 scope.go:117] "RemoveContainer" containerID="8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f" Jan 28 15:23:21 crc kubenswrapper[4981]: E0128 15:23:21.769755 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f\": container with ID starting with 8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f not found: ID does not exist" containerID="8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.769826 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f"} err="failed to get container status \"8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f\": rpc error: code = NotFound desc = could not find container \"8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f\": container with ID starting with 8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f not found: ID does not exist" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.769894 4981 scope.go:117] "RemoveContainer" 
containerID="599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.770140 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e"} err="failed to get container status \"599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e\": rpc error: code = NotFound desc = could not find container \"599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e\": container with ID starting with 599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e not found: ID does not exist" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.770260 4981 scope.go:117] "RemoveContainer" containerID="23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.770716 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758"} err="failed to get container status \"23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758\": rpc error: code = NotFound desc = could not find container \"23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758\": container with ID starting with 23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758 not found: ID does not exist" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.770755 4981 scope.go:117] "RemoveContainer" containerID="c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.771037 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec"} err="failed to get container status \"c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec\": rpc error: code = NotFound desc = could not find container \"c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec\": container with ID starting with c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec not found: ID does not exist" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.771136 4981 scope.go:117] "RemoveContainer" containerID="8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.771420 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f"} err="failed to get container status \"8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f\": rpc error: code = NotFound desc = could not find container \"8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f\": container with ID starting with 8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f not found: ID does not exist" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.771441 4981 scope.go:117] "RemoveContainer" containerID="599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.771633 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e"} err="failed to get container status \"599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e\": rpc error: code = NotFound desc = could not find 
container \"599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e\": container with ID starting with 599cbe28d7b572205ae33ddf1c4299b7ab8e7ef23b92f4f4483f65f4bcc90e1e not found: ID does not exist" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.771652 4981 scope.go:117] "RemoveContainer" containerID="23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.771859 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758"} err="failed to get container status \"23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758\": rpc error: code = NotFound desc = could not find container \"23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758\": container with ID starting with 23484249837b44b834ca576dc1083fff9242da70a3c7d6732df8e8b1504c2758 not found: ID does not exist" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.771929 4981 scope.go:117] "RemoveContainer" containerID="c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.772225 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec"} err="failed to get container status \"c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec\": rpc error: code = NotFound desc = could not find container \"c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec\": container with ID starting with c0bdf60a9ff0c8b0ad46698bf326d0bc9f8eb93a449bfe64e832d326ace482ec not found: ID does not exist" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.772314 4981 scope.go:117] "RemoveContainer" containerID="8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.772559 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f"} err="failed to get container status \"8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f\": rpc error: code = NotFound desc = could not find container \"8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f\": container with ID starting with 8963f3629ba11ba5a676b5742b157df7ebcc1097d8cd553bf6c4b42ab35f991f not found: ID does not exist" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.850747 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2njx6\" (UniqueName: \"kubernetes.io/projected/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-kube-api-access-2njx6\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.850820 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.850879 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-log-httpd\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.850903 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-scripts\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.850933 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.850988 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-run-httpd\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.851023 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-config-data\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.952300 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2njx6\" (UniqueName: \"kubernetes.io/projected/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-kube-api-access-2njx6\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.952377 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.952426 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-log-httpd\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.952446 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-scripts\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.952469 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.952512 4981 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-run-httpd\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.952541 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-config-data\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.953077 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-run-httpd\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.953172 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-log-httpd\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.956155 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-scripts\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.956251 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.956743 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.958126 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-config-data\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:21 crc kubenswrapper[4981]: I0128 15:23:21.970966 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2njx6\" (UniqueName: \"kubernetes.io/projected/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-kube-api-access-2njx6\") pod \"ceilometer-0\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") " pod="openstack/ceilometer-0" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.061136 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.285801 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-77d87cc6cd-znvvw"] Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.291314 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.295347 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-r5qwq" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.295465 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.295511 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.295351 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.295728 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.298372 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-77d87cc6cd-znvvw"] Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.361838 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqdrf\" (UniqueName: \"kubernetes.io/projected/7e60cad3-42b0-4a56-be02-e4433ea5585f-kube-api-access-rqdrf\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.361975 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e60cad3-42b0-4a56-be02-e4433ea5585f-combined-ca-bundle\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.362019 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e60cad3-42b0-4a56-be02-e4433ea5585f-logs\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.362068 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e60cad3-42b0-4a56-be02-e4433ea5585f-scripts\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.362137 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e60cad3-42b0-4a56-be02-e4433ea5585f-internal-tls-certs\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.362163 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e60cad3-42b0-4a56-be02-e4433ea5585f-config-data\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.362210 4981 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e60cad3-42b0-4a56-be02-e4433ea5585f-public-tls-certs\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.463961 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e60cad3-42b0-4a56-be02-e4433ea5585f-combined-ca-bundle\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.464340 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e60cad3-42b0-4a56-be02-e4433ea5585f-logs\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.464393 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e60cad3-42b0-4a56-be02-e4433ea5585f-scripts\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.464458 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e60cad3-42b0-4a56-be02-e4433ea5585f-internal-tls-certs\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.464486 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e60cad3-42b0-4a56-be02-e4433ea5585f-config-data\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.464520 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e60cad3-42b0-4a56-be02-e4433ea5585f-public-tls-certs\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.464548 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqdrf\" (UniqueName: \"kubernetes.io/projected/7e60cad3-42b0-4a56-be02-e4433ea5585f-kube-api-access-rqdrf\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.465334 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7e60cad3-42b0-4a56-be02-e4433ea5585f-logs\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.469996 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7e60cad3-42b0-4a56-be02-e4433ea5585f-scripts\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.470439 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e60cad3-42b0-4a56-be02-e4433ea5585f-internal-tls-certs\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.470857 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e60cad3-42b0-4a56-be02-e4433ea5585f-combined-ca-bundle\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.475000 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e60cad3-42b0-4a56-be02-e4433ea5585f-config-data\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.482265 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e60cad3-42b0-4a56-be02-e4433ea5585f-public-tls-certs\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.485676 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqdrf\" (UniqueName: \"kubernetes.io/projected/7e60cad3-42b0-4a56-be02-e4433ea5585f-kube-api-access-rqdrf\") pod \"placement-77d87cc6cd-znvvw\" (UID: \"7e60cad3-42b0-4a56-be02-e4433ea5585f\") " pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.518710 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:23:22 crc kubenswrapper[4981]: W0128 15:23:22.522130 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ae3fd27_af3a_4dc4_b377_e3fcc4ccca46.slice/crio-3268877eb2f603f41a22fd9b1a4ce027936784ecb1dbcaddcaeb23d7985c2d39 WatchSource:0}: Error finding container 3268877eb2f603f41a22fd9b1a4ce027936784ecb1dbcaddcaeb23d7985c2d39: Status 404 returned error can't find the container with id 3268877eb2f603f41a22fd9b1a4ce027936784ecb1dbcaddcaeb23d7985c2d39 Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.614225 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.647235 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46","Type":"ContainerStarted","Data":"3268877eb2f603f41a22fd9b1a4ce027936784ecb1dbcaddcaeb23d7985c2d39"} Jan 28 15:23:22 crc kubenswrapper[4981]: I0128 15:23:22.677841 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-d79b67667-4jvlp"] Jan 28 15:23:22 crc kubenswrapper[4981]: W0128 15:23:22.688075 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3854c5d_2ac4_48d0_96df_a96b2fa5feb7.slice/crio-e3bf1e77be4121df87a1708fd5818d5c7559515d9326391367e8fbd0d346a68c WatchSource:0}: Error finding container e3bf1e77be4121df87a1708fd5818d5c7559515d9326391367e8fbd0d346a68c: Status 404 returned error can't find the container with id e3bf1e77be4121df87a1708fd5818d5c7559515d9326391367e8fbd0d346a68c Jan 28 15:23:23 crc kubenswrapper[4981]: I0128 15:23:23.059378 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 15:23:23 crc kubenswrapper[4981]: I0128 15:23:23.060002 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ba373618-9613-48d3-9023-ce519f54fb7f" containerName="glance-log" containerID="cri-o://d810d5ee9d29efe6acb870a3e3f5e1310ce05c5bd1e8da15cfd060324fb7a8f8" gracePeriod=30 Jan 28 15:23:23 crc kubenswrapper[4981]: I0128 15:23:23.060523 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ba373618-9613-48d3-9023-ce519f54fb7f" containerName="glance-httpd" containerID="cri-o://470893bbe07f2153e5d01594057a84bf22009bbbfdb4c5274c8239c021c7c3e8" gracePeriod=30 Jan 28 15:23:23 crc kubenswrapper[4981]: I0128 15:23:23.067014 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-77d87cc6cd-znvvw"] Jan 28 15:23:23 crc kubenswrapper[4981]: I0128 15:23:23.336267 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="def15511-2069-4c0c-88ac-b22ce637e066" path="/var/lib/kubelet/pods/def15511-2069-4c0c-88ac-b22ce637e066/volumes" Jan 28 15:23:23 crc kubenswrapper[4981]: I0128 15:23:23.662316 4981 generic.go:334] "Generic (PLEG): container finished" podID="ba373618-9613-48d3-9023-ce519f54fb7f" containerID="d810d5ee9d29efe6acb870a3e3f5e1310ce05c5bd1e8da15cfd060324fb7a8f8" exitCode=143 Jan 28 15:23:23 crc kubenswrapper[4981]: I0128 15:23:23.662352 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba373618-9613-48d3-9023-ce519f54fb7f","Type":"ContainerDied","Data":"d810d5ee9d29efe6acb870a3e3f5e1310ce05c5bd1e8da15cfd060324fb7a8f8"} Jan 28 15:23:23 crc kubenswrapper[4981]: I0128 15:23:23.664489 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-d79b67667-4jvlp" event={"ID":"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7","Type":"ContainerStarted","Data":"fa095602911d1c0d2fb823f0abd9d090e84c84e9028e7066d62a9b0caa92d647"} Jan 28 15:23:23 crc kubenswrapper[4981]: I0128 15:23:23.664520 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-d79b67667-4jvlp" 
event={"ID":"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7","Type":"ContainerStarted","Data":"6fe01be5d3a137e4f3666a2c68ee2de1b89da98e7f5219a15e3204822b5976f3"} Jan 28 15:23:23 crc kubenswrapper[4981]: I0128 15:23:23.664534 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-d79b67667-4jvlp" event={"ID":"f3854c5d-2ac4-48d0-96df-a96b2fa5feb7","Type":"ContainerStarted","Data":"e3bf1e77be4121df87a1708fd5818d5c7559515d9326391367e8fbd0d346a68c"} Jan 28 15:23:23 crc kubenswrapper[4981]: I0128 15:23:23.664975 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:23 crc kubenswrapper[4981]: I0128 15:23:23.665168 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-d79b67667-4jvlp" Jan 28 15:23:23 crc kubenswrapper[4981]: I0128 15:23:23.667861 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77d87cc6cd-znvvw" event={"ID":"7e60cad3-42b0-4a56-be02-e4433ea5585f","Type":"ContainerStarted","Data":"ada6fb37b17d7886ee660f86ec5d85c17b8b88cad977ab0dd02883ffc7a4523d"} Jan 28 15:23:23 crc kubenswrapper[4981]: I0128 15:23:23.667909 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77d87cc6cd-znvvw" event={"ID":"7e60cad3-42b0-4a56-be02-e4433ea5585f","Type":"ContainerStarted","Data":"594cf18e51c4749d686d4ed13bee77093b15ed71bef165c55bde7013b440baf4"} Jan 28 15:23:23 crc kubenswrapper[4981]: I0128 15:23:23.688796 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-d79b67667-4jvlp" podStartSLOduration=5.6887801289999995 podStartE2EDuration="5.688780129s" podCreationTimestamp="2026-01-28 15:23:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:23.687229679 +0000 UTC m=+1215.139387930" watchObservedRunningTime="2026-01-28 15:23:23.688780129 +0000 UTC m=+1215.140938380" Jan 28 15:23:24 crc kubenswrapper[4981]: I0128 15:23:24.704447 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46","Type":"ContainerStarted","Data":"38f60aa5b6b381aecfa0426a25c09e04cf54a8863492ccfb1fe5dd9fc529cf0d"} Jan 28 15:23:24 crc kubenswrapper[4981]: I0128 15:23:24.707516 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-77d87cc6cd-znvvw" event={"ID":"7e60cad3-42b0-4a56-be02-e4433ea5585f","Type":"ContainerStarted","Data":"11d6d1463fa605818e60aa8766d550292914b8c11ad35c275fb527d23607b150"} Jan 28 15:23:24 crc kubenswrapper[4981]: I0128 15:23:24.739403 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-77d87cc6cd-znvvw" podStartSLOduration=2.739382419 podStartE2EDuration="2.739382419s" podCreationTimestamp="2026-01-28 15:23:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:24.732232753 +0000 UTC m=+1216.184391014" watchObservedRunningTime="2026-01-28 15:23:24.739382419 +0000 UTC m=+1216.191540660" Jan 28 15:23:25 crc kubenswrapper[4981]: I0128 15:23:25.716426 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-77d87cc6cd-znvvw" Jan 28 15:23:25 crc kubenswrapper[4981]: I0128 15:23:25.716785 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-77d87cc6cd-znvvw" Jan 
28 15:23:26 crc kubenswrapper[4981]: I0128 15:23:26.728874 4981 generic.go:334] "Generic (PLEG): container finished" podID="ba373618-9613-48d3-9023-ce519f54fb7f" containerID="470893bbe07f2153e5d01594057a84bf22009bbbfdb4c5274c8239c021c7c3e8" exitCode=0
Jan 28 15:23:26 crc kubenswrapper[4981]: I0128 15:23:26.728982 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba373618-9613-48d3-9023-ce519f54fb7f","Type":"ContainerDied","Data":"470893bbe07f2153e5d01594057a84bf22009bbbfdb4c5274c8239c021c7c3e8"}
Jan 28 15:23:26 crc kubenswrapper[4981]: I0128 15:23:26.733674 4981 generic.go:334] "Generic (PLEG): container finished" podID="465c6840-8900-4520-b80a-aab52f45c173" containerID="0efb8f5d9806b6b93762b01f054f77ce9c400ef76a1e729a9449091684c8c2bc" exitCode=137
Jan 28 15:23:26 crc kubenswrapper[4981]: I0128 15:23:26.733731 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54f45b9c5b-drxcg" event={"ID":"465c6840-8900-4520-b80a-aab52f45c173","Type":"ContainerDied","Data":"0efb8f5d9806b6b93762b01f054f77ce9c400ef76a1e729a9449091684c8c2bc"}
Jan 28 15:23:28 crc kubenswrapper[4981]: I0128 15:23:28.791010 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-d79b67667-4jvlp"
Jan 28 15:23:28 crc kubenswrapper[4981]: I0128 15:23:28.792608 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-d79b67667-4jvlp"
Jan 28 15:23:29 crc kubenswrapper[4981]: I0128 15:23:29.425071 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 15:23:29 crc kubenswrapper[4981]: I0128 15:23:29.425365 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf" containerName="glance-log" containerID="cri-o://bebd08ae06e0b4910da4cd505ee93d126322a0a659d8d14bc8e0164e6c8cbccc" gracePeriod=30
Jan 28 15:23:29 crc kubenswrapper[4981]: I0128 15:23:29.425426 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf" containerName="glance-httpd" containerID="cri-o://d460fc23afba8c3d14a8a7e4e28bbd2cfcf4a79c8689e2e7ef1614c95984f872" gracePeriod=30
Jan 28 15:23:29 crc kubenswrapper[4981]: I0128 15:23:29.784697 4981 generic.go:334] "Generic (PLEG): container finished" podID="0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf" containerID="bebd08ae06e0b4910da4cd505ee93d126322a0a659d8d14bc8e0164e6c8cbccc" exitCode=143
Jan 28 15:23:29 crc kubenswrapper[4981]: I0128 15:23:29.784946 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf","Type":"ContainerDied","Data":"bebd08ae06e0b4910da4cd505ee93d126322a0a659d8d14bc8e0164e6c8cbccc"}
Jan 28 15:23:29 crc kubenswrapper[4981]: I0128 15:23:29.909622 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54f45b9c5b-drxcg"
Jan 28 15:23:29 crc kubenswrapper[4981]: I0128 15:23:29.917240 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.004954 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/465c6840-8900-4520-b80a-aab52f45c173-horizon-secret-key\") pod \"465c6840-8900-4520-b80a-aab52f45c173\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") "
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.005105 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/465c6840-8900-4520-b80a-aab52f45c173-scripts\") pod \"465c6840-8900-4520-b80a-aab52f45c173\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") "
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.005229 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8vl8\" (UniqueName: \"kubernetes.io/projected/465c6840-8900-4520-b80a-aab52f45c173-kube-api-access-x8vl8\") pod \"465c6840-8900-4520-b80a-aab52f45c173\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") "
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.005261 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/465c6840-8900-4520-b80a-aab52f45c173-horizon-tls-certs\") pod \"465c6840-8900-4520-b80a-aab52f45c173\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") "
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.005347 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465c6840-8900-4520-b80a-aab52f45c173-logs\") pod \"465c6840-8900-4520-b80a-aab52f45c173\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") "
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.005385 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/465c6840-8900-4520-b80a-aab52f45c173-config-data\") pod \"465c6840-8900-4520-b80a-aab52f45c173\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") "
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.005449 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465c6840-8900-4520-b80a-aab52f45c173-combined-ca-bundle\") pod \"465c6840-8900-4520-b80a-aab52f45c173\" (UID: \"465c6840-8900-4520-b80a-aab52f45c173\") "
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.015363 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/465c6840-8900-4520-b80a-aab52f45c173-logs" (OuterVolumeSpecName: "logs") pod "465c6840-8900-4520-b80a-aab52f45c173" (UID: "465c6840-8900-4520-b80a-aab52f45c173"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.033938 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465c6840-8900-4520-b80a-aab52f45c173-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "465c6840-8900-4520-b80a-aab52f45c173" (UID: "465c6840-8900-4520-b80a-aab52f45c173"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.037835 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/465c6840-8900-4520-b80a-aab52f45c173-kube-api-access-x8vl8" (OuterVolumeSpecName: "kube-api-access-x8vl8") pod "465c6840-8900-4520-b80a-aab52f45c173" (UID: "465c6840-8900-4520-b80a-aab52f45c173"). InnerVolumeSpecName "kube-api-access-x8vl8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.041474 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/465c6840-8900-4520-b80a-aab52f45c173-scripts" (OuterVolumeSpecName: "scripts") pod "465c6840-8900-4520-b80a-aab52f45c173" (UID: "465c6840-8900-4520-b80a-aab52f45c173"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.065930 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/465c6840-8900-4520-b80a-aab52f45c173-config-data" (OuterVolumeSpecName: "config-data") pod "465c6840-8900-4520-b80a-aab52f45c173" (UID: "465c6840-8900-4520-b80a-aab52f45c173"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.081448 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465c6840-8900-4520-b80a-aab52f45c173-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "465c6840-8900-4520-b80a-aab52f45c173" (UID: "465c6840-8900-4520-b80a-aab52f45c173"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.082279 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/465c6840-8900-4520-b80a-aab52f45c173-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "465c6840-8900-4520-b80a-aab52f45c173" (UID: "465c6840-8900-4520-b80a-aab52f45c173"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.106834 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba373618-9613-48d3-9023-ce519f54fb7f-logs\") pod \"ba373618-9613-48d3-9023-ce519f54fb7f\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") "
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.106876 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97p98\" (UniqueName: \"kubernetes.io/projected/ba373618-9613-48d3-9023-ce519f54fb7f-kube-api-access-97p98\") pod \"ba373618-9613-48d3-9023-ce519f54fb7f\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") "
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.106901 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-combined-ca-bundle\") pod \"ba373618-9613-48d3-9023-ce519f54fb7f\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") "
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.107007 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ba373618-9613-48d3-9023-ce519f54fb7f\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") "
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.107046 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-scripts\") pod \"ba373618-9613-48d3-9023-ce519f54fb7f\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") "
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.107096 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba373618-9613-48d3-9023-ce519f54fb7f-httpd-run\") pod \"ba373618-9613-48d3-9023-ce519f54fb7f\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") "
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.107120 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-public-tls-certs\") pod \"ba373618-9613-48d3-9023-ce519f54fb7f\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") "
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.107147 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-config-data\") pod \"ba373618-9613-48d3-9023-ce519f54fb7f\" (UID: \"ba373618-9613-48d3-9023-ce519f54fb7f\") "
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.107369 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba373618-9613-48d3-9023-ce519f54fb7f-logs" (OuterVolumeSpecName: "logs") pod "ba373618-9613-48d3-9023-ce519f54fb7f" (UID: "ba373618-9613-48d3-9023-ce519f54fb7f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.107473 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba373618-9613-48d3-9023-ce519f54fb7f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ba373618-9613-48d3-9023-ce519f54fb7f" (UID: "ba373618-9613-48d3-9023-ce519f54fb7f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.107530 4981 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/465c6840-8900-4520-b80a-aab52f45c173-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.107543 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/465c6840-8900-4520-b80a-aab52f45c173-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.107552 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8vl8\" (UniqueName: \"kubernetes.io/projected/465c6840-8900-4520-b80a-aab52f45c173-kube-api-access-x8vl8\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.107564 4981 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/465c6840-8900-4520-b80a-aab52f45c173-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.107573 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/465c6840-8900-4520-b80a-aab52f45c173-logs\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.107581 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/465c6840-8900-4520-b80a-aab52f45c173-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.107589 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/465c6840-8900-4520-b80a-aab52f45c173-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.110312 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-scripts" (OuterVolumeSpecName: "scripts") pod "ba373618-9613-48d3-9023-ce519f54fb7f" (UID: "ba373618-9613-48d3-9023-ce519f54fb7f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.111069 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba373618-9613-48d3-9023-ce519f54fb7f-kube-api-access-97p98" (OuterVolumeSpecName: "kube-api-access-97p98") pod "ba373618-9613-48d3-9023-ce519f54fb7f" (UID: "ba373618-9613-48d3-9023-ce519f54fb7f"). InnerVolumeSpecName "kube-api-access-97p98". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.111469 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "ba373618-9613-48d3-9023-ce519f54fb7f" (UID: "ba373618-9613-48d3-9023-ce519f54fb7f"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.136058 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ba373618-9613-48d3-9023-ce519f54fb7f" (UID: "ba373618-9613-48d3-9023-ce519f54fb7f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.187400 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ba373618-9613-48d3-9023-ce519f54fb7f" (UID: "ba373618-9613-48d3-9023-ce519f54fb7f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.192705 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-config-data" (OuterVolumeSpecName: "config-data") pod "ba373618-9613-48d3-9023-ce519f54fb7f" (UID: "ba373618-9613-48d3-9023-ce519f54fb7f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.209093 4981 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" "
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.209125 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.209135 4981 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ba373618-9613-48d3-9023-ce519f54fb7f-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.209144 4981 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.209153 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.209161 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ba373618-9613-48d3-9023-ce519f54fb7f-logs\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.209169 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97p98\" (UniqueName: \"kubernetes.io/projected/ba373618-9613-48d3-9023-ce519f54fb7f-kube-api-access-97p98\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.209178 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ba373618-9613-48d3-9023-ce519f54fb7f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.225350 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc"
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.310571 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.793528 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46","Type":"ContainerStarted","Data":"0ece829d3192440ac9d2b29babaa974ff959b9ec8978ff8acf890debd584c85e"}
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.795325 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-54f45b9c5b-drxcg" event={"ID":"465c6840-8900-4520-b80a-aab52f45c173","Type":"ContainerDied","Data":"aa234ada1f8c83e06fe3243406375953ce8cd67774514a2666a26efecc4b530a"}
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.795388 4981 scope.go:117] "RemoveContainer" containerID="9b99100a343508bba0a99a5e7f2da53ebd6eef781ca2ce91d1729c68522926f0"
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.795560 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-54f45b9c5b-drxcg"
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.815644 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ba373618-9613-48d3-9023-ce519f54fb7f","Type":"ContainerDied","Data":"291833be7fed236c8947a2e228d0acd10e0dff79da94771cf435760c350eb095"}
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.815964 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.941304 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-54f45b9c5b-drxcg"]
Jan 28 15:23:30 crc kubenswrapper[4981]: I0128 15:23:30.973262 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-54f45b9c5b-drxcg"]
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.025836 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.037782 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.049149 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 15:23:31 crc kubenswrapper[4981]: E0128 15:23:31.050236 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba373618-9613-48d3-9023-ce519f54fb7f" containerName="glance-httpd"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.050284 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba373618-9613-48d3-9023-ce519f54fb7f" containerName="glance-httpd"
Jan 28 15:23:31 crc kubenswrapper[4981]: E0128 15:23:31.050321 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465c6840-8900-4520-b80a-aab52f45c173" containerName="horizon-log"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.050353 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="465c6840-8900-4520-b80a-aab52f45c173" containerName="horizon-log"
Jan 28 15:23:31 crc kubenswrapper[4981]: E0128 15:23:31.050381 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba373618-9613-48d3-9023-ce519f54fb7f" containerName="glance-log"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.050390 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba373618-9613-48d3-9023-ce519f54fb7f" containerName="glance-log"
Jan 28 15:23:31 crc kubenswrapper[4981]: E0128 15:23:31.050427 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="465c6840-8900-4520-b80a-aab52f45c173" containerName="horizon"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.050435 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="465c6840-8900-4520-b80a-aab52f45c173" containerName="horizon"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.050880 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="465c6840-8900-4520-b80a-aab52f45c173" containerName="horizon-log"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.050925 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba373618-9613-48d3-9023-ce519f54fb7f" containerName="glance-httpd"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.050947 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="465c6840-8900-4520-b80a-aab52f45c173" containerName="horizon"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.050963 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba373618-9613-48d3-9023-ce519f54fb7f" containerName="glance-log"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.053814 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.057814 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.057962 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.090127 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.092621 4981 scope.go:117] "RemoveContainer" containerID="0efb8f5d9806b6b93762b01f054f77ce9c400ef76a1e729a9449091684c8c2bc"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.117501 4981 scope.go:117] "RemoveContainer" containerID="470893bbe07f2153e5d01594057a84bf22009bbbfdb4c5274c8239c021c7c3e8"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.141003 4981 scope.go:117] "RemoveContainer" containerID="d810d5ee9d29efe6acb870a3e3f5e1310ce05c5bd1e8da15cfd060324fb7a8f8"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.228795 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7888281b-1740-4d52-9752-cac22c11d44e-scripts\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.228846 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7888281b-1740-4d52-9752-cac22c11d44e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.228889 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7888281b-1740-4d52-9752-cac22c11d44e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.228933 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.228948 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7888281b-1740-4d52-9752-cac22c11d44e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.228967 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn6gb\" (UniqueName: \"kubernetes.io/projected/7888281b-1740-4d52-9752-cac22c11d44e-kube-api-access-sn6gb\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.229024 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7888281b-1740-4d52-9752-cac22c11d44e-logs\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.229039 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7888281b-1740-4d52-9752-cac22c11d44e-config-data\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.329518 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="465c6840-8900-4520-b80a-aab52f45c173" path="/var/lib/kubelet/pods/465c6840-8900-4520-b80a-aab52f45c173/volumes"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.330405 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba373618-9613-48d3-9023-ce519f54fb7f" path="/var/lib/kubelet/pods/ba373618-9613-48d3-9023-ce519f54fb7f/volumes"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.330755 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7888281b-1740-4d52-9752-cac22c11d44e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.330815 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7888281b-1740-4d52-9752-cac22c11d44e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.330837 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.330881 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn6gb\" (UniqueName: \"kubernetes.io/projected/7888281b-1740-4d52-9752-cac22c11d44e-kube-api-access-sn6gb\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.331301 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.331485 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7888281b-1740-4d52-9752-cac22c11d44e-logs\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.331510 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7888281b-1740-4d52-9752-cac22c11d44e-config-data\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.331507 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7888281b-1740-4d52-9752-cac22c11d44e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.331586 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7888281b-1740-4d52-9752-cac22c11d44e-scripts\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.331610 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7888281b-1740-4d52-9752-cac22c11d44e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.331831 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7888281b-1740-4d52-9752-cac22c11d44e-logs\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.337833 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7888281b-1740-4d52-9752-cac22c11d44e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.338793 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7888281b-1740-4d52-9752-cac22c11d44e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.339799 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7888281b-1740-4d52-9752-cac22c11d44e-config-data\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.354606 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7888281b-1740-4d52-9752-cac22c11d44e-scripts\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.371215 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn6gb\" (UniqueName: \"kubernetes.io/projected/7888281b-1740-4d52-9752-cac22c11d44e-kube-api-access-sn6gb\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.375936 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"7888281b-1740-4d52-9752-cac22c11d44e\") " pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.394520 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.627222 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.830712 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46","Type":"ContainerStarted","Data":"e68bed12f94650b5a4f28f2cf0b0843d1b1db74a65bfe612ab0e70ecfc6c99c8"}
Jan 28 15:23:31 crc kubenswrapper[4981]: I0128 15:23:31.948128 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 15:23:32 crc kubenswrapper[4981]: I0128 15:23:32.846385 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46","Type":"ContainerStarted","Data":"b2c594831f6d498d299cde1375c09c2d1c77a97e84244c84afdae6e480592713"}
Jan 28 15:23:32 crc kubenswrapper[4981]: I0128 15:23:32.846805 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerName="ceilometer-central-agent" containerID="cri-o://38f60aa5b6b381aecfa0426a25c09e04cf54a8863492ccfb1fe5dd9fc529cf0d" gracePeriod=30
Jan 28 15:23:32 crc kubenswrapper[4981]: I0128 15:23:32.846882 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 28 15:23:32 crc kubenswrapper[4981]: I0128 15:23:32.847197 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerName="proxy-httpd" containerID="cri-o://b2c594831f6d498d299cde1375c09c2d1c77a97e84244c84afdae6e480592713" gracePeriod=30
Jan 28 15:23:32 crc kubenswrapper[4981]: I0128 15:23:32.847246 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerName="sg-core" containerID="cri-o://e68bed12f94650b5a4f28f2cf0b0843d1b1db74a65bfe612ab0e70ecfc6c99c8" gracePeriod=30
Jan 28 15:23:32 crc kubenswrapper[4981]: I0128 15:23:32.847284 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerName="ceilometer-notification-agent" containerID="cri-o://0ece829d3192440ac9d2b29babaa974ff959b9ec8978ff8acf890debd584c85e" gracePeriod=30
Jan 28 15:23:32 crc kubenswrapper[4981]: I0128 15:23:32.858417 4981 generic.go:334] "Generic (PLEG): container finished" podID="0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf" containerID="d460fc23afba8c3d14a8a7e4e28bbd2cfcf4a79c8689e2e7ef1614c95984f872" exitCode=0
Jan 28 15:23:32 crc kubenswrapper[4981]: I0128 15:23:32.858508 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf","Type":"ContainerDied","Data":"d460fc23afba8c3d14a8a7e4e28bbd2cfcf4a79c8689e2e7ef1614c95984f872"}
Jan 28 15:23:32 crc kubenswrapper[4981]: I0128 15:23:32.861517 4981 generic.go:334] "Generic (PLEG): container finished" podID="5a747315-c181-4459-ae1d-3c0c5252efb7" containerID="ef41d371bc38090af3ea5f5d5b836da62de3f1eb3c0855d616e9eebdaa5e2145" exitCode=0
Jan 28 15:23:32 crc kubenswrapper[4981]: I0128 15:23:32.861570 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-h8htg" event={"ID":"5a747315-c181-4459-ae1d-3c0c5252efb7","Type":"ContainerDied","Data":"ef41d371bc38090af3ea5f5d5b836da62de3f1eb3c0855d616e9eebdaa5e2145"}
Jan 28 15:23:32 crc kubenswrapper[4981]: I0128 15:23:32.863589 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7888281b-1740-4d52-9752-cac22c11d44e","Type":"ContainerStarted","Data":"c4dbf54cf3e013a2536c492f2718cf431cf9d5bf5006b1abf7f78ed026ce03a4"}
Jan 28 15:23:32 crc kubenswrapper[4981]: I0128 15:23:32.863612 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7888281b-1740-4d52-9752-cac22c11d44e","Type":"ContainerStarted","Data":"1591c3b50feff3089efb381e5420b91745113c29289a08d60576d9c7b35eafe7"}
Jan 28 15:23:32 crc kubenswrapper[4981]: I0128 15:23:32.907394 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.879650361 podStartE2EDuration="11.907376871s" podCreationTimestamp="2026-01-28 15:23:21 +0000 UTC" firstStartedPulling="2026-01-28 15:23:22.526652831 +0000 UTC m=+1213.978811062" lastFinishedPulling="2026-01-28 15:23:32.554379331 +0000 UTC m=+1224.006537572" observedRunningTime="2026-01-28 15:23:32.896423906 +0000 UTC m=+1224.348582147" watchObservedRunningTime="2026-01-28 15:23:32.907376871 +0000 UTC m=+1224.359535112"
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.222745 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.363548 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-scripts\") pod \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") "
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.363682 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-internal-tls-certs\") pod \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") "
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.363747 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-config-data\") pod \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") "
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.363772 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") "
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.363882 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b5fd\" (UniqueName: \"kubernetes.io/projected/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-kube-api-access-7b5fd\") pod \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") "
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.363930 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-logs\") pod \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") "
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.363974 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-httpd-run\") pod \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") "
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.364024 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-combined-ca-bundle\") pod \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\" (UID: \"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf\") "
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.364415 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-logs" (OuterVolumeSpecName: "logs") pod "0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf" (UID: "0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.364616 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf" (UID: "0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.368612 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-logs\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.368637 4981 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.371675 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-kube-api-access-7b5fd" (OuterVolumeSpecName: "kube-api-access-7b5fd") pod "0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf" (UID: "0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf"). InnerVolumeSpecName "kube-api-access-7b5fd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.371894 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-scripts" (OuterVolumeSpecName: "scripts") pod "0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf" (UID: "0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.376666 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf" (UID: "0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.420087 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-config-data" (OuterVolumeSpecName: "config-data") pod "0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf" (UID: "0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.421139 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf" (UID: "0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.435835 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf" (UID: "0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.469829 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b5fd\" (UniqueName: \"kubernetes.io/projected/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-kube-api-access-7b5fd\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.469862 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.469872 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.469882 4981 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.469893 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.469917 4981 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.489961 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.571941 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.881703 4981 generic.go:334] "Generic (PLEG): container finished" podID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerID="e68bed12f94650b5a4f28f2cf0b0843d1b1db74a65bfe612ab0e70ecfc6c99c8" exitCode=2
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.881739 4981 generic.go:334] "Generic (PLEG): container finished" podID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerID="0ece829d3192440ac9d2b29babaa974ff959b9ec8978ff8acf890debd584c85e" exitCode=0
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.881786 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46","Type":"ContainerDied","Data":"e68bed12f94650b5a4f28f2cf0b0843d1b1db74a65bfe612ab0e70ecfc6c99c8"}
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.881878 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46","Type":"ContainerDied","Data":"0ece829d3192440ac9d2b29babaa974ff959b9ec8978ff8acf890debd584c85e"}
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.885266 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.885264 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf","Type":"ContainerDied","Data":"9a4942493b50881421cbfe4857c7b8fa1be0cb02a3604f45817305ed2f68c764"}
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.885352 4981 scope.go:117] "RemoveContainer" containerID="d460fc23afba8c3d14a8a7e4e28bbd2cfcf4a79c8689e2e7ef1614c95984f872"
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.888647 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"7888281b-1740-4d52-9752-cac22c11d44e","Type":"ContainerStarted","Data":"90757e6d9b53715ed39e5848251ba15ad3638dcb99678815595ede5cacd00533"}
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.950208 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.9501720479999998 podStartE2EDuration="3.950172048s" podCreationTimestamp="2026-01-28 15:23:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:33.922651894 +0000 UTC m=+1225.374810125" watchObservedRunningTime="2026-01-28 15:23:33.950172048 +0000 UTC m=+1225.402330289"
Jan 28 15:23:33 crc kubenswrapper[4981]: I0128 15:23:33.964964 4981 scope.go:117] "RemoveContainer" containerID="bebd08ae06e0b4910da4cd505ee93d126322a0a659d8d14bc8e0164e6c8cbccc"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.006497 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.029217 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.038036 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 15:23:34 crc kubenswrapper[4981]: E0128 15:23:34.038517 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf" containerName="glance-httpd"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.038530 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf" containerName="glance-httpd"
Jan 28 15:23:34 crc kubenswrapper[4981]: E0128 15:23:34.038544 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf" containerName="glance-log"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.038550 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf" containerName="glance-log"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.038706 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf" containerName="glance-httpd"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.038727 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf" containerName="glance-log"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.039680 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.045787 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.045852 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.050759 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.189716 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.190356 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce43940a-33fa-4da9-a910-a57dc6230e57-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.190445 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce43940a-33fa-4da9-a910-a57dc6230e57-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.190500 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j69mb\" (UniqueName: \"kubernetes.io/projected/ce43940a-33fa-4da9-a910-a57dc6230e57-kube-api-access-j69mb\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.190537 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce43940a-33fa-4da9-a910-a57dc6230e57-logs\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.190561 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce43940a-33fa-4da9-a910-a57dc6230e57-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.190592 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce43940a-33fa-4da9-a910-a57dc6230e57-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.190635 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce43940a-33fa-4da9-a910-a57dc6230e57-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.292788 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.292882 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce43940a-33fa-4da9-a910-a57dc6230e57-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.292937 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce43940a-33fa-4da9-a910-a57dc6230e57-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.292972 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j69mb\" (UniqueName: \"kubernetes.io/projected/ce43940a-33fa-4da9-a910-a57dc6230e57-kube-api-access-j69mb\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.292997 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce43940a-33fa-4da9-a910-a57dc6230e57-logs\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.293020 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce43940a-33fa-4da9-a910-a57dc6230e57-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.293042 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce43940a-33fa-4da9-a910-a57dc6230e57-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.293068 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce43940a-33fa-4da9-a910-a57dc6230e57-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.293536 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.294117 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce43940a-33fa-4da9-a910-a57dc6230e57-logs\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.294507 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce43940a-33fa-4da9-a910-a57dc6230e57-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.298186 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce43940a-33fa-4da9-a910-a57dc6230e57-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.298234 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce43940a-33fa-4da9-a910-a57dc6230e57-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.298533 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce43940a-33fa-4da9-a910-a57dc6230e57-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.299105 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce43940a-33fa-4da9-a910-a57dc6230e57-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.312688 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j69mb\" (UniqueName: \"kubernetes.io/projected/ce43940a-33fa-4da9-a910-a57dc6230e57-kube-api-access-j69mb\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.324880 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"ce43940a-33fa-4da9-a910-a57dc6230e57\") " pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.362144 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.432672 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-h8htg"
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.496623 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-db-sync-config-data\") pod \"5a747315-c181-4459-ae1d-3c0c5252efb7\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") "
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.497096 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-scripts\") pod \"5a747315-c181-4459-ae1d-3c0c5252efb7\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") "
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.497403 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-combined-ca-bundle\") pod \"5a747315-c181-4459-ae1d-3c0c5252efb7\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") "
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.497453 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rbls\" (UniqueName: \"kubernetes.io/projected/5a747315-c181-4459-ae1d-3c0c5252efb7-kube-api-access-4rbls\") pod \"5a747315-c181-4459-ae1d-3c0c5252efb7\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") "
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.497501 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a747315-c181-4459-ae1d-3c0c5252efb7-etc-machine-id\") pod \"5a747315-c181-4459-ae1d-3c0c5252efb7\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") "
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.497591 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-config-data\") pod \"5a747315-c181-4459-ae1d-3c0c5252efb7\" (UID: \"5a747315-c181-4459-ae1d-3c0c5252efb7\") "
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.510573 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-scripts" (OuterVolumeSpecName: "scripts") pod "5a747315-c181-4459-ae1d-3c0c5252efb7" (UID: "5a747315-c181-4459-ae1d-3c0c5252efb7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.512346 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a747315-c181-4459-ae1d-3c0c5252efb7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5a747315-c181-4459-ae1d-3c0c5252efb7" (UID: "5a747315-c181-4459-ae1d-3c0c5252efb7"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.514997 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5a747315-c181-4459-ae1d-3c0c5252efb7" (UID: "5a747315-c181-4459-ae1d-3c0c5252efb7"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.519398 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a747315-c181-4459-ae1d-3c0c5252efb7-kube-api-access-4rbls" (OuterVolumeSpecName: "kube-api-access-4rbls") pod "5a747315-c181-4459-ae1d-3c0c5252efb7" (UID: "5a747315-c181-4459-ae1d-3c0c5252efb7"). InnerVolumeSpecName "kube-api-access-4rbls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.535821 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a747315-c181-4459-ae1d-3c0c5252efb7" (UID: "5a747315-c181-4459-ae1d-3c0c5252efb7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.569700 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-config-data" (OuterVolumeSpecName: "config-data") pod "5a747315-c181-4459-ae1d-3c0c5252efb7" (UID: "5a747315-c181-4459-ae1d-3c0c5252efb7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.600380 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.600427 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rbls\" (UniqueName: \"kubernetes.io/projected/5a747315-c181-4459-ae1d-3c0c5252efb7-kube-api-access-4rbls\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.600440 4981 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5a747315-c181-4459-ae1d-3c0c5252efb7-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.601359 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.601375 4981 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.601387 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a747315-c181-4459-ae1d-3c0c5252efb7-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.901250 4981 generic.go:334] "Generic (PLEG): container finished" podID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerID="38f60aa5b6b381aecfa0426a25c09e04cf54a8863492ccfb1fe5dd9fc529cf0d" exitCode=0
Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.901327 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46","Type":"ContainerDied","Data":"38f60aa5b6b381aecfa0426a25c09e04cf54a8863492ccfb1fe5dd9fc529cf0d"} Jan
28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.905241 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-h8htg" event={"ID":"5a747315-c181-4459-ae1d-3c0c5252efb7","Type":"ContainerDied","Data":"c38114ba03bd3260954a61211889bd6620bfb71f24289d7ada18824cbf3b3bbe"} Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.905296 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c38114ba03bd3260954a61211889bd6620bfb71f24289d7ada18824cbf3b3bbe" Jan 28 15:23:34 crc kubenswrapper[4981]: I0128 15:23:34.905302 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-h8htg" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.002871 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.255054 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 15:23:35 crc kubenswrapper[4981]: E0128 15:23:35.255693 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a747315-c181-4459-ae1d-3c0c5252efb7" containerName="cinder-db-sync" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.255711 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a747315-c181-4459-ae1d-3c0c5252efb7" containerName="cinder-db-sync" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.255872 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a747315-c181-4459-ae1d-3c0c5252efb7" containerName="cinder-db-sync" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.256688 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.262525 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.262792 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-zxwcf" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.263138 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.263350 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.274412 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.313145 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-config-data\") pod \"cinder-scheduler-0\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") " pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.313242 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") " pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.313319 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae859ea5-0c04-4736-9e30-6bf1337dd21d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") " pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.313339 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-scripts\") pod \"cinder-scheduler-0\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") " pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.313442 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pz4p\" (UniqueName: \"kubernetes.io/projected/ae859ea5-0c04-4736-9e30-6bf1337dd21d-kube-api-access-2pz4p\") pod \"cinder-scheduler-0\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") " pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.313962 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") " pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.332957 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf" path="/var/lib/kubelet/pods/0baa7e2a-4636-4237-9d38-7c5e7d0ef8cf/volumes" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.415549 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pz4p\" (UniqueName: \"kubernetes.io/projected/ae859ea5-0c04-4736-9e30-6bf1337dd21d-kube-api-access-2pz4p\") pod \"cinder-scheduler-0\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") " pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.415787 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") " pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.415931 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-config-data\") pod \"cinder-scheduler-0\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") " pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.416039 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") " pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.416075 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae859ea5-0c04-4736-9e30-6bf1337dd21d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") " pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 
15:23:35.416098 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-scripts\") pod \"cinder-scheduler-0\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") " pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.416999 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae859ea5-0c04-4736-9e30-6bf1337dd21d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") " pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.420938 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") " pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.425862 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-scripts\") pod \"cinder-scheduler-0\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") " pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.429299 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-config-data\") pod \"cinder-scheduler-0\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") " pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.432016 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") " pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.434923 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pz4p\" (UniqueName: \"kubernetes.io/projected/ae859ea5-0c04-4736-9e30-6bf1337dd21d-kube-api-access-2pz4p\") pod \"cinder-scheduler-0\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") " pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.586135 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.684237 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-n4p6d"] Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.686130 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.694830 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-n4p6d"] Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.723498 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-ovsdbserver-sb\") pod \"dnsmasq-dns-69c986f6d7-n4p6d\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.723770 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-config\") pod \"dnsmasq-dns-69c986f6d7-n4p6d\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.723920 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-dns-svc\") pod \"dnsmasq-dns-69c986f6d7-n4p6d\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.724281 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-dns-swift-storage-0\") pod \"dnsmasq-dns-69c986f6d7-n4p6d\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.724407 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-ovsdbserver-nb\") pod \"dnsmasq-dns-69c986f6d7-n4p6d\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.724561 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kfpg\" (UniqueName: \"kubernetes.io/projected/30426464-1d4b-4ac9-86c1-7d4e458000ba-kube-api-access-9kfpg\") pod \"dnsmasq-dns-69c986f6d7-n4p6d\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.833571 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.836527 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-ovsdbserver-sb\") pod \"dnsmasq-dns-69c986f6d7-n4p6d\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.836611 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-config\") pod \"dnsmasq-dns-69c986f6d7-n4p6d\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " 
pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.836635 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-dns-svc\") pod \"dnsmasq-dns-69c986f6d7-n4p6d\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.836733 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-dns-swift-storage-0\") pod \"dnsmasq-dns-69c986f6d7-n4p6d\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.836792 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-ovsdbserver-nb\") pod \"dnsmasq-dns-69c986f6d7-n4p6d\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.836881 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kfpg\" (UniqueName: \"kubernetes.io/projected/30426464-1d4b-4ac9-86c1-7d4e458000ba-kube-api-access-9kfpg\") pod \"dnsmasq-dns-69c986f6d7-n4p6d\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.836891 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.839765 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-dns-svc\") pod \"dnsmasq-dns-69c986f6d7-n4p6d\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.841279 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.841646 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-ovsdbserver-sb\") pod \"dnsmasq-dns-69c986f6d7-n4p6d\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.845746 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-dns-swift-storage-0\") pod \"dnsmasq-dns-69c986f6d7-n4p6d\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.854611 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-config\") pod \"dnsmasq-dns-69c986f6d7-n4p6d\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.864294 4981 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-ovsdbserver-nb\") pod \"dnsmasq-dns-69c986f6d7-n4p6d\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.877557 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.879044 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kfpg\" (UniqueName: \"kubernetes.io/projected/30426464-1d4b-4ac9-86c1-7d4e458000ba-kube-api-access-9kfpg\") pod \"dnsmasq-dns-69c986f6d7-n4p6d\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.922369 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ce43940a-33fa-4da9-a910-a57dc6230e57","Type":"ContainerStarted","Data":"bffa58b6832263390041994cea40faca75e8529f7748f8793117497c2b4a5fc7"} Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.922414 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ce43940a-33fa-4da9-a910-a57dc6230e57","Type":"ContainerStarted","Data":"cee08e9353ce08b1ad2993b41b1427a6ce0a00a873155e8abae651df3a0ef3f4"} Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.926692 4981 generic.go:334] "Generic (PLEG): container finished" podID="150ae7b2-4b64-48c7-86b3-71d7841afba3" containerID="4daa23ca251918f431c71796a106f059b31f9684e7815db0ea87cfcbd133962d" exitCode=0 Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.926768 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-h9wgp" event={"ID":"150ae7b2-4b64-48c7-86b3-71d7841afba3","Type":"ContainerDied","Data":"4daa23ca251918f431c71796a106f059b31f9684e7815db0ea87cfcbd133962d"} Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.942814 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1306cbc-c01f-4393-93b4-bd40a4165953-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.943225 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-config-data-custom\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.943321 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94hkv\" (UniqueName: \"kubernetes.io/projected/b1306cbc-c01f-4393-93b4-bd40a4165953-kube-api-access-94hkv\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.943408 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 
28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.943515 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-config-data\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.943597 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-scripts\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:35 crc kubenswrapper[4981]: I0128 15:23:35.943691 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1306cbc-c01f-4393-93b4-bd40a4165953-logs\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.012328 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.049484 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1306cbc-c01f-4393-93b4-bd40a4165953-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.049534 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-config-data-custom\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.049566 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94hkv\" (UniqueName: \"kubernetes.io/projected/b1306cbc-c01f-4393-93b4-bd40a4165953-kube-api-access-94hkv\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.049591 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.049644 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-config-data\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.049648 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1306cbc-c01f-4393-93b4-bd40a4165953-etc-machine-id\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.049671 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-scripts\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.049757 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1306cbc-c01f-4393-93b4-bd40a4165953-logs\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.050336 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1306cbc-c01f-4393-93b4-bd40a4165953-logs\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.054034 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-config-data-custom\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.054812 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.055095 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-scripts\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.066382 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94hkv\" (UniqueName: \"kubernetes.io/projected/b1306cbc-c01f-4393-93b4-bd40a4165953-kube-api-access-94hkv\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.074294 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-config-data\") pod \"cinder-api-0\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " pod="openstack/cinder-api-0" Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.185170 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.238341 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 15:23:36 crc kubenswrapper[4981]: W0128 15:23:36.248356 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae859ea5_0c04_4736_9e30_6bf1337dd21d.slice/crio-e3cc00b0b3586fcf86777d2aaf8d1fb4d9273b296a929e29c0f8c7dd47284bbe WatchSource:0}: Error finding container e3cc00b0b3586fcf86777d2aaf8d1fb4d9273b296a929e29c0f8c7dd47284bbe: Status 404 returned error can't find the container with id e3cc00b0b3586fcf86777d2aaf8d1fb4d9273b296a929e29c0f8c7dd47284bbe Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.547778 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-n4p6d"] Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.604801 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.938611 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b1306cbc-c01f-4393-93b4-bd40a4165953","Type":"ContainerStarted","Data":"7e6e4c8d5bbfc8218f2d624080406014ca47fc0cc547f9c3da80cb2a83b1f240"} Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.941681 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ce43940a-33fa-4da9-a910-a57dc6230e57","Type":"ContainerStarted","Data":"1d5d5e644ebb0a4f8f6713f2a111d7efe6da8fd7c13678ef21e6c437cb13d16b"} Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.945609 4981 generic.go:334] "Generic (PLEG): container finished" podID="30426464-1d4b-4ac9-86c1-7d4e458000ba" containerID="8f27aaa24d3c338124a0f9d903db2ce708e43235cc727f3a4f6124bd25e82b26" exitCode=0 Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.945723 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" event={"ID":"30426464-1d4b-4ac9-86c1-7d4e458000ba","Type":"ContainerDied","Data":"8f27aaa24d3c338124a0f9d903db2ce708e43235cc727f3a4f6124bd25e82b26"} Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.945768 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" event={"ID":"30426464-1d4b-4ac9-86c1-7d4e458000ba","Type":"ContainerStarted","Data":"dc75d6b66c1047511183f1191e758489ee1cc8c7695f81e97b0b1def93750068"} Jan 28 15:23:36 crc kubenswrapper[4981]: I0128 15:23:36.949678 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae859ea5-0c04-4736-9e30-6bf1337dd21d","Type":"ContainerStarted","Data":"e3cc00b0b3586fcf86777d2aaf8d1fb4d9273b296a929e29c0f8c7dd47284bbe"} Jan 28 15:23:37 crc kubenswrapper[4981]: I0128 15:23:37.013537 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.013519972 podStartE2EDuration="4.013519972s" podCreationTimestamp="2026-01-28 15:23:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:36.96610242 +0000 UTC m=+1228.418260661" watchObservedRunningTime="2026-01-28 15:23:37.013519972 +0000 UTC m=+1228.465678213" Jan 28 15:23:37 crc kubenswrapper[4981]: I0128 15:23:37.370640 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-h9wgp" Jan 28 15:23:37 crc kubenswrapper[4981]: I0128 15:23:37.480525 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/150ae7b2-4b64-48c7-86b3-71d7841afba3-combined-ca-bundle\") pod \"150ae7b2-4b64-48c7-86b3-71d7841afba3\" (UID: \"150ae7b2-4b64-48c7-86b3-71d7841afba3\") " Jan 28 15:23:37 crc kubenswrapper[4981]: I0128 15:23:37.480648 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4xdk\" (UniqueName: \"kubernetes.io/projected/150ae7b2-4b64-48c7-86b3-71d7841afba3-kube-api-access-g4xdk\") pod \"150ae7b2-4b64-48c7-86b3-71d7841afba3\" (UID: \"150ae7b2-4b64-48c7-86b3-71d7841afba3\") " Jan 28 15:23:37 crc kubenswrapper[4981]: I0128 15:23:37.480886 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/150ae7b2-4b64-48c7-86b3-71d7841afba3-config\") pod \"150ae7b2-4b64-48c7-86b3-71d7841afba3\" (UID: \"150ae7b2-4b64-48c7-86b3-71d7841afba3\") " Jan 28 15:23:37 crc kubenswrapper[4981]: I0128 15:23:37.503569 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/150ae7b2-4b64-48c7-86b3-71d7841afba3-kube-api-access-g4xdk" (OuterVolumeSpecName: "kube-api-access-g4xdk") pod "150ae7b2-4b64-48c7-86b3-71d7841afba3" (UID: "150ae7b2-4b64-48c7-86b3-71d7841afba3"). InnerVolumeSpecName "kube-api-access-g4xdk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:37 crc kubenswrapper[4981]: I0128 15:23:37.527539 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/150ae7b2-4b64-48c7-86b3-71d7841afba3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "150ae7b2-4b64-48c7-86b3-71d7841afba3" (UID: "150ae7b2-4b64-48c7-86b3-71d7841afba3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:23:37 crc kubenswrapper[4981]: I0128 15:23:37.537817 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/150ae7b2-4b64-48c7-86b3-71d7841afba3-config" (OuterVolumeSpecName: "config") pod "150ae7b2-4b64-48c7-86b3-71d7841afba3" (UID: "150ae7b2-4b64-48c7-86b3-71d7841afba3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:23:37 crc kubenswrapper[4981]: I0128 15:23:37.543071 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 15:23:37 crc kubenswrapper[4981]: I0128 15:23:37.583547 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4xdk\" (UniqueName: \"kubernetes.io/projected/150ae7b2-4b64-48c7-86b3-71d7841afba3-kube-api-access-g4xdk\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:37 crc kubenswrapper[4981]: I0128 15:23:37.583582 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/150ae7b2-4b64-48c7-86b3-71d7841afba3-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:37 crc kubenswrapper[4981]: I0128 15:23:37.583593 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/150ae7b2-4b64-48c7-86b3-71d7841afba3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:37 crc kubenswrapper[4981]: I0128 15:23:37.983946 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" event={"ID":"30426464-1d4b-4ac9-86c1-7d4e458000ba","Type":"ContainerStarted","Data":"d68239935d044773a1dd1ede1a7e92cb9900e54235df046fc9895068c8d81865"} Jan 28 15:23:37 crc kubenswrapper[4981]: I0128 15:23:37.985349 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:37 crc kubenswrapper[4981]: I0128 15:23:37.994206 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-h9wgp" event={"ID":"150ae7b2-4b64-48c7-86b3-71d7841afba3","Type":"ContainerDied","Data":"613412e27f2b9f40ad052e3fd85249747f27e4ea02593239fabaeeb143bb57ac"} Jan 28 15:23:37 crc kubenswrapper[4981]: I0128 15:23:37.994241 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="613412e27f2b9f40ad052e3fd85249747f27e4ea02593239fabaeeb143bb57ac" Jan 28 15:23:37 crc kubenswrapper[4981]: I0128 15:23:37.994296 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-h9wgp" Jan 28 15:23:37 crc kubenswrapper[4981]: I0128 15:23:37.997264 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b1306cbc-c01f-4393-93b4-bd40a4165953","Type":"ContainerStarted","Data":"48b428b6ab349f99b264a96307829f8ee6ebe9ba7f57da4b08a3d47cc28ec97c"} Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.007180 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" podStartSLOduration=3.007161003 podStartE2EDuration="3.007161003s" podCreationTimestamp="2026-01-28 15:23:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:38.004922075 +0000 UTC m=+1229.457080316" watchObservedRunningTime="2026-01-28 15:23:38.007161003 +0000 UTC m=+1229.459319244" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.161406 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-n4p6d"] Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.221315 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-z7d4r"] Jan 28 15:23:38 crc kubenswrapper[4981]: E0128 15:23:38.224778 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150ae7b2-4b64-48c7-86b3-71d7841afba3" containerName="neutron-db-sync" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.224817 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="150ae7b2-4b64-48c7-86b3-71d7841afba3" containerName="neutron-db-sync" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.225340 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="150ae7b2-4b64-48c7-86b3-71d7841afba3" containerName="neutron-db-sync" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.265083 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-z7d4r"] Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.265224 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.304435 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-z7d4r\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") " pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.304800 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-67cc6fc44d-7stvd"] Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.304955 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-z7d4r\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") " pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.305050 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-config\") pod \"dnsmasq-dns-5784cf869f-z7d4r\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") " pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.305142 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-z7d4r\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") " pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.305260 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4s6r\" (UniqueName: \"kubernetes.io/projected/331416fc-f914-4ed7-8326-ff3db72c5246-kube-api-access-s4s6r\") pod \"dnsmasq-dns-5784cf869f-z7d4r\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") " pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.305357 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-dns-svc\") pod \"dnsmasq-dns-5784cf869f-z7d4r\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") " pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.306495 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.313329 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.313665 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-9qvbr" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.313830 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.314728 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.369295 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67cc6fc44d-7stvd"] Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.406905 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-config\") pod \"dnsmasq-dns-5784cf869f-z7d4r\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") " pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.406965 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-z7d4r\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") " pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.407007 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4s6r\" (UniqueName: \"kubernetes.io/projected/331416fc-f914-4ed7-8326-ff3db72c5246-kube-api-access-s4s6r\") pod \"dnsmasq-dns-5784cf869f-z7d4r\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") " pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.407042 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-dns-svc\") pod \"dnsmasq-dns-5784cf869f-z7d4r\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") " pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.407106 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-z7d4r\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") " pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.407141 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-z7d4r\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") " pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.408364 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-ovsdbserver-nb\") pod \"dnsmasq-dns-5784cf869f-z7d4r\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") " 
pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.408390 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-dns-svc\") pod \"dnsmasq-dns-5784cf869f-z7d4r\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") " pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.408692 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-config\") pod \"dnsmasq-dns-5784cf869f-z7d4r\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") " pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.409655 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-dns-swift-storage-0\") pod \"dnsmasq-dns-5784cf869f-z7d4r\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") " pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.409795 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-ovsdbserver-sb\") pod \"dnsmasq-dns-5784cf869f-z7d4r\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") " pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.442670 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4s6r\" (UniqueName: \"kubernetes.io/projected/331416fc-f914-4ed7-8326-ff3db72c5246-kube-api-access-s4s6r\") pod \"dnsmasq-dns-5784cf869f-z7d4r\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") " pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.509976 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-httpd-config\") pod \"neutron-67cc6fc44d-7stvd\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.510290 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-config\") pod \"neutron-67cc6fc44d-7stvd\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.510375 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-ovndb-tls-certs\") pod \"neutron-67cc6fc44d-7stvd\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.510437 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nqzl\" (UniqueName: \"kubernetes.io/projected/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-kube-api-access-4nqzl\") pod \"neutron-67cc6fc44d-7stvd\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:23:38 crc 
kubenswrapper[4981]: I0128 15:23:38.510467 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-combined-ca-bundle\") pod \"neutron-67cc6fc44d-7stvd\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.615573 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-httpd-config\") pod \"neutron-67cc6fc44d-7stvd\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.616057 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-config\") pod \"neutron-67cc6fc44d-7stvd\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.616088 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-ovndb-tls-certs\") pod \"neutron-67cc6fc44d-7stvd\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.616107 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nqzl\" (UniqueName: \"kubernetes.io/projected/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-kube-api-access-4nqzl\") pod \"neutron-67cc6fc44d-7stvd\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.616128 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-combined-ca-bundle\") pod \"neutron-67cc6fc44d-7stvd\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.620993 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-combined-ca-bundle\") pod \"neutron-67cc6fc44d-7stvd\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.624732 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-httpd-config\") pod \"neutron-67cc6fc44d-7stvd\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.631019 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-ovndb-tls-certs\") pod \"neutron-67cc6fc44d-7stvd\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.633766 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-config\") pod \"neutron-67cc6fc44d-7stvd\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.640987 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.658845 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nqzl\" (UniqueName: \"kubernetes.io/projected/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-kube-api-access-4nqzl\") pod \"neutron-67cc6fc44d-7stvd\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:23:38 crc kubenswrapper[4981]: I0128 15:23:38.659804 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:23:39 crc kubenswrapper[4981]: I0128 15:23:39.021649 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae859ea5-0c04-4736-9e30-6bf1337dd21d","Type":"ContainerStarted","Data":"bd1ec6ad954c7a01efa3740296798499ea2fee68df00d0aa9ddd42177d6096b0"} Jan 28 15:23:39 crc kubenswrapper[4981]: I0128 15:23:39.034603 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b1306cbc-c01f-4393-93b4-bd40a4165953","Type":"ContainerStarted","Data":"7c09d3911bc5a23879cc39d42d793f491217a92eceb38bc0ed94eabd6249b2a2"} Jan 28 15:23:39 crc kubenswrapper[4981]: I0128 15:23:39.034655 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b1306cbc-c01f-4393-93b4-bd40a4165953" containerName="cinder-api-log" containerID="cri-o://48b428b6ab349f99b264a96307829f8ee6ebe9ba7f57da4b08a3d47cc28ec97c" gracePeriod=30 Jan 28 15:23:39 crc kubenswrapper[4981]: I0128 15:23:39.034776 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="b1306cbc-c01f-4393-93b4-bd40a4165953" containerName="cinder-api" containerID="cri-o://7c09d3911bc5a23879cc39d42d793f491217a92eceb38bc0ed94eabd6249b2a2" gracePeriod=30 Jan 28 15:23:39 crc kubenswrapper[4981]: I0128 15:23:39.034809 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 15:23:39 crc kubenswrapper[4981]: I0128 15:23:39.059754 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.059731096 podStartE2EDuration="4.059731096s" podCreationTimestamp="2026-01-28 15:23:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:39.056673026 +0000 UTC m=+1230.508831267" watchObservedRunningTime="2026-01-28 15:23:39.059731096 +0000 UTC m=+1230.511889337" Jan 28 15:23:39 crc kubenswrapper[4981]: I0128 15:23:39.226780 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-z7d4r"] Jan 28 15:23:39 crc kubenswrapper[4981]: W0128 15:23:39.449927 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod331416fc_f914_4ed7_8326_ff3db72c5246.slice/crio-b1f64d7575d8909b46d18d98c8d1daa1dc3d0b88b550e7c08a09e283f70f1a61 WatchSource:0}: Error finding container b1f64d7575d8909b46d18d98c8d1daa1dc3d0b88b550e7c08a09e283f70f1a61: Status 
404 returned error can't find the container with id b1f64d7575d8909b46d18d98c8d1daa1dc3d0b88b550e7c08a09e283f70f1a61
Jan 28 15:23:39 crc kubenswrapper[4981]: I0128 15:23:39.520589 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-67cc6fc44d-7stvd"]
Jan 28 15:23:39 crc kubenswrapper[4981]: I0128 15:23:39.782878 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-lkbq5"]
Jan 28 15:23:39 crc kubenswrapper[4981]: I0128 15:23:39.807635 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lkbq5"
Jan 28 15:23:39 crc kubenswrapper[4981]: I0128 15:23:39.883178 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-lkbq5"]
Jan 28 15:23:39 crc kubenswrapper[4981]: I0128 15:23:39.955481 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6vfxh"]
Jan 28 15:23:39 crc kubenswrapper[4981]: I0128 15:23:39.961036 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6vfxh"
Jan 28 15:23:39 crc kubenswrapper[4981]: I0128 15:23:39.974349 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6vfxh"]
Jan 28 15:23:39 crc kubenswrapper[4981]: I0128 15:23:39.977357 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4a209b-c995-42fa-9a1c-82a1c9c60e91-operator-scripts\") pod \"nova-api-db-create-lkbq5\" (UID: \"6d4a209b-c995-42fa-9a1c-82a1c9c60e91\") " pod="openstack/nova-api-db-create-lkbq5"
Jan 28 15:23:39 crc kubenswrapper[4981]: I0128 15:23:39.977450 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d8xm\" (UniqueName: \"kubernetes.io/projected/6d4a209b-c995-42fa-9a1c-82a1c9c60e91-kube-api-access-2d8xm\") pod \"nova-api-db-create-lkbq5\" (UID: \"6d4a209b-c995-42fa-9a1c-82a1c9c60e91\") " pod="openstack/nova-api-db-create-lkbq5"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.015257 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-41b8-account-create-update-s8rrk"]
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.016798 4981 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-41b8-account-create-update-s8rrk"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.020856 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.043772 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-41b8-account-create-update-s8rrk"]
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.092527 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d8xm\" (UniqueName: \"kubernetes.io/projected/6d4a209b-c995-42fa-9a1c-82a1c9c60e91-kube-api-access-2d8xm\") pod \"nova-api-db-create-lkbq5\" (UID: \"6d4a209b-c995-42fa-9a1c-82a1c9c60e91\") " pod="openstack/nova-api-db-create-lkbq5"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.092595 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1994b16c-0ff3-4534-be1a-fcc718dd6eed-operator-scripts\") pod \"nova-cell0-db-create-6vfxh\" (UID: \"1994b16c-0ff3-4534-be1a-fcc718dd6eed\") " pod="openstack/nova-cell0-db-create-6vfxh"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.092964 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlx24\" (UniqueName: \"kubernetes.io/projected/1994b16c-0ff3-4534-be1a-fcc718dd6eed-kube-api-access-xlx24\") pod \"nova-cell0-db-create-6vfxh\" (UID: \"1994b16c-0ff3-4534-be1a-fcc718dd6eed\") " pod="openstack/nova-cell0-db-create-6vfxh"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.092993 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4a209b-c995-42fa-9a1c-82a1c9c60e91-operator-scripts\") pod \"nova-api-db-create-lkbq5\" (UID: \"6d4a209b-c995-42fa-9a1c-82a1c9c60e91\") " pod="openstack/nova-api-db-create-lkbq5"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.093937 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4a209b-c995-42fa-9a1c-82a1c9c60e91-operator-scripts\") pod \"nova-api-db-create-lkbq5\" (UID: \"6d4a209b-c995-42fa-9a1c-82a1c9c60e91\") " pod="openstack/nova-api-db-create-lkbq5"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.119544 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d8xm\" (UniqueName: \"kubernetes.io/projected/6d4a209b-c995-42fa-9a1c-82a1c9c60e91-kube-api-access-2d8xm\") pod \"nova-api-db-create-lkbq5\" (UID: \"6d4a209b-c995-42fa-9a1c-82a1c9c60e91\") " pod="openstack/nova-api-db-create-lkbq5"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.120902 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-czcg6"]
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.122631 4981 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-db-create-czcg6"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.131575 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-czcg6"]
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.145427 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cc6fc44d-7stvd" event={"ID":"bd5d9602-d2bd-4dfd-9249-41f61260b5eb","Type":"ContainerStarted","Data":"813330d2834717ca327ca40dd470f63cab153b5cf018d494be56de42bc57af41"}
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.145502 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cc6fc44d-7stvd" event={"ID":"bd5d9602-d2bd-4dfd-9249-41f61260b5eb","Type":"ContainerStarted","Data":"45721bd76663858b588b6ff49c0b0b8a8a08d3adafb2ba6509ff6db0991c6ca4"}
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.147990 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lkbq5"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.161733 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-733a-account-create-update-x8f26"]
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.166265 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-733a-account-create-update-x8f26"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.174101 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-733a-account-create-update-x8f26"]
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.182394 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.194829 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a51d19b-f2d1-4164-bb39-6da466eb115c-operator-scripts\") pod \"nova-api-41b8-account-create-update-s8rrk\" (UID: \"2a51d19b-f2d1-4164-bb39-6da466eb115c\") " pod="openstack/nova-api-41b8-account-create-update-s8rrk"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.195115 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlx24\" (UniqueName: \"kubernetes.io/projected/1994b16c-0ff3-4534-be1a-fcc718dd6eed-kube-api-access-xlx24\") pod \"nova-cell0-db-create-6vfxh\" (UID: \"1994b16c-0ff3-4534-be1a-fcc718dd6eed\") " pod="openstack/nova-cell0-db-create-6vfxh"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.195206 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxkdf\" (UniqueName: \"kubernetes.io/projected/2a51d19b-f2d1-4164-bb39-6da466eb115c-kube-api-access-kxkdf\") pod \"nova-api-41b8-account-create-update-s8rrk\" (UID: \"2a51d19b-f2d1-4164-bb39-6da466eb115c\") " pod="openstack/nova-api-41b8-account-create-update-s8rrk"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.195422 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1994b16c-0ff3-4534-be1a-fcc718dd6eed-operator-scripts\") pod \"nova-cell0-db-create-6vfxh\" (UID: \"1994b16c-0ff3-4534-be1a-fcc718dd6eed\") " pod="openstack/nova-cell0-db-create-6vfxh"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.197489 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1994b16c-0ff3-4534-be1a-fcc718dd6eed-operator-scripts\") pod \"nova-cell0-db-create-6vfxh\" (UID: \"1994b16c-0ff3-4534-be1a-fcc718dd6eed\") " pod="openstack/nova-cell0-db-create-6vfxh" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.231164 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlx24\" (UniqueName: \"kubernetes.io/projected/1994b16c-0ff3-4534-be1a-fcc718dd6eed-kube-api-access-xlx24\") pod \"nova-cell0-db-create-6vfxh\" (UID: \"1994b16c-0ff3-4534-be1a-fcc718dd6eed\") " pod="openstack/nova-cell0-db-create-6vfxh" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.242627 4981 generic.go:334] "Generic (PLEG): container finished" podID="331416fc-f914-4ed7-8326-ff3db72c5246" containerID="5f3d403873b8c2defde5017c32acc5d63bd22b9d80e556c87a060fb4dee0a46e" exitCode=0 Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.242988 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" event={"ID":"331416fc-f914-4ed7-8326-ff3db72c5246","Type":"ContainerDied","Data":"5f3d403873b8c2defde5017c32acc5d63bd22b9d80e556c87a060fb4dee0a46e"} Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.243543 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" event={"ID":"331416fc-f914-4ed7-8326-ff3db72c5246","Type":"ContainerStarted","Data":"b1f64d7575d8909b46d18d98c8d1daa1dc3d0b88b550e7c08a09e283f70f1a61"} Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.270551 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.271620 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae859ea5-0c04-4736-9e30-6bf1337dd21d","Type":"ContainerStarted","Data":"c2027115c3035a634ec281c625d1cc45c30403a0f167041e2fa1d88f9c6340c1"} Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.278824 4981 generic.go:334] "Generic (PLEG): container finished" podID="b1306cbc-c01f-4393-93b4-bd40a4165953" containerID="7c09d3911bc5a23879cc39d42d793f491217a92eceb38bc0ed94eabd6249b2a2" exitCode=0 Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.278857 4981 generic.go:334] "Generic (PLEG): container finished" podID="b1306cbc-c01f-4393-93b4-bd40a4165953" containerID="48b428b6ab349f99b264a96307829f8ee6ebe9ba7f57da4b08a3d47cc28ec97c" exitCode=143 Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.279097 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" podUID="30426464-1d4b-4ac9-86c1-7d4e458000ba" containerName="dnsmasq-dns" containerID="cri-o://d68239935d044773a1dd1ede1a7e92cb9900e54235df046fc9895068c8d81865" gracePeriod=10 Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.279232 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.279849 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b1306cbc-c01f-4393-93b4-bd40a4165953","Type":"ContainerDied","Data":"7c09d3911bc5a23879cc39d42d793f491217a92eceb38bc0ed94eabd6249b2a2"}
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.279882 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b1306cbc-c01f-4393-93b4-bd40a4165953","Type":"ContainerDied","Data":"48b428b6ab349f99b264a96307829f8ee6ebe9ba7f57da4b08a3d47cc28ec97c"}
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.279893 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"b1306cbc-c01f-4393-93b4-bd40a4165953","Type":"ContainerDied","Data":"7e6e4c8d5bbfc8218f2d624080406014ca47fc0cc547f9c3da80cb2a83b1f240"}
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.280005 4981 scope.go:117] "RemoveContainer" containerID="7c09d3911bc5a23879cc39d42d793f491217a92eceb38bc0ed94eabd6249b2a2"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.298129 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73d5db70-d787-4d5b-9c2c-64859f2acf0c-operator-scripts\") pod \"nova-cell1-db-create-czcg6\" (UID: \"73d5db70-d787-4d5b-9c2c-64859f2acf0c\") " pod="openstack/nova-cell1-db-create-czcg6"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.298206 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a51d19b-f2d1-4164-bb39-6da466eb115c-operator-scripts\") pod \"nova-api-41b8-account-create-update-s8rrk\" (UID: \"2a51d19b-f2d1-4164-bb39-6da466eb115c\") " pod="openstack/nova-api-41b8-account-create-update-s8rrk"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.298233 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85334287-4d7c-428a-bf1d-20f5511f442a-operator-scripts\") pod \"nova-cell0-733a-account-create-update-x8f26\" (UID: \"85334287-4d7c-428a-bf1d-20f5511f442a\") " pod="openstack/nova-cell0-733a-account-create-update-x8f26"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.298309 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djkf2\" (UniqueName: \"kubernetes.io/projected/73d5db70-d787-4d5b-9c2c-64859f2acf0c-kube-api-access-djkf2\") pod \"nova-cell1-db-create-czcg6\" (UID: \"73d5db70-d787-4d5b-9c2c-64859f2acf0c\") " pod="openstack/nova-cell1-db-create-czcg6"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.298333 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnk9f\" (UniqueName: \"kubernetes.io/projected/85334287-4d7c-428a-bf1d-20f5511f442a-kube-api-access-nnk9f\") pod \"nova-cell0-733a-account-create-update-x8f26\" (UID: \"85334287-4d7c-428a-bf1d-20f5511f442a\") " pod="openstack/nova-cell0-733a-account-create-update-x8f26"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.298360 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxkdf\" (UniqueName: \"kubernetes.io/projected/2a51d19b-f2d1-4164-bb39-6da466eb115c-kube-api-access-kxkdf\") pod
\"nova-api-41b8-account-create-update-s8rrk\" (UID: \"2a51d19b-f2d1-4164-bb39-6da466eb115c\") " pod="openstack/nova-api-41b8-account-create-update-s8rrk" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.300154 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a51d19b-f2d1-4164-bb39-6da466eb115c-operator-scripts\") pod \"nova-api-41b8-account-create-update-s8rrk\" (UID: \"2a51d19b-f2d1-4164-bb39-6da466eb115c\") " pod="openstack/nova-api-41b8-account-create-update-s8rrk" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.302118 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-51f6-account-create-update-qdrbh"] Jan 28 15:23:40 crc kubenswrapper[4981]: E0128 15:23:40.304991 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1306cbc-c01f-4393-93b4-bd40a4165953" containerName="cinder-api-log" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.305022 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1306cbc-c01f-4393-93b4-bd40a4165953" containerName="cinder-api-log" Jan 28 15:23:40 crc kubenswrapper[4981]: E0128 15:23:40.305039 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1306cbc-c01f-4393-93b4-bd40a4165953" containerName="cinder-api" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.305048 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1306cbc-c01f-4393-93b4-bd40a4165953" containerName="cinder-api" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.306399 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1306cbc-c01f-4393-93b4-bd40a4165953" containerName="cinder-api-log" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.306438 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1306cbc-c01f-4393-93b4-bd40a4165953" containerName="cinder-api" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.307536 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-51f6-account-create-update-qdrbh"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.309434 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.321821 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-51f6-account-create-update-qdrbh"]
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.344333 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxkdf\" (UniqueName: \"kubernetes.io/projected/2a51d19b-f2d1-4164-bb39-6da466eb115c-kube-api-access-kxkdf\") pod \"nova-api-41b8-account-create-update-s8rrk\" (UID: \"2a51d19b-f2d1-4164-bb39-6da466eb115c\") " pod="openstack/nova-api-41b8-account-create-update-s8rrk"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.390779 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.9577767550000003 podStartE2EDuration="5.390756209s" podCreationTimestamp="2026-01-28 15:23:35 +0000 UTC" firstStartedPulling="2026-01-28 15:23:36.253449848 +0000 UTC m=+1227.705608089" lastFinishedPulling="2026-01-28 15:23:37.686429312 +0000 UTC m=+1229.138587543" observedRunningTime="2026-01-28 15:23:40.376877376 +0000 UTC m=+1231.829035637" watchObservedRunningTime="2026-01-28 15:23:40.390756209 +0000 UTC m=+1231.842914460"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.400128 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-config-data\") pod \"b1306cbc-c01f-4393-93b4-bd40a4165953\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") "
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.400239 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1306cbc-c01f-4393-93b4-bd40a4165953-logs\") pod \"b1306cbc-c01f-4393-93b4-bd40a4165953\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") "
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.400270 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-scripts\") pod \"b1306cbc-c01f-4393-93b4-bd40a4165953\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") "
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.400361 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-combined-ca-bundle\") pod \"b1306cbc-c01f-4393-93b4-bd40a4165953\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") "
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.400411 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1306cbc-c01f-4393-93b4-bd40a4165953-etc-machine-id\") pod \"b1306cbc-c01f-4393-93b4-bd40a4165953\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") "
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.400451 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-config-data-custom\") pod \"b1306cbc-c01f-4393-93b4-bd40a4165953\" (UID:
\"b1306cbc-c01f-4393-93b4-bd40a4165953\") " Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.400537 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94hkv\" (UniqueName: \"kubernetes.io/projected/b1306cbc-c01f-4393-93b4-bd40a4165953-kube-api-access-94hkv\") pod \"b1306cbc-c01f-4393-93b4-bd40a4165953\" (UID: \"b1306cbc-c01f-4393-93b4-bd40a4165953\") " Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.400893 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djkf2\" (UniqueName: \"kubernetes.io/projected/73d5db70-d787-4d5b-9c2c-64859f2acf0c-kube-api-access-djkf2\") pod \"nova-cell1-db-create-czcg6\" (UID: \"73d5db70-d787-4d5b-9c2c-64859f2acf0c\") " pod="openstack/nova-cell1-db-create-czcg6" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.400936 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnk9f\" (UniqueName: \"kubernetes.io/projected/85334287-4d7c-428a-bf1d-20f5511f442a-kube-api-access-nnk9f\") pod \"nova-cell0-733a-account-create-update-x8f26\" (UID: \"85334287-4d7c-428a-bf1d-20f5511f442a\") " pod="openstack/nova-cell0-733a-account-create-update-x8f26" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.401032 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc6f6e3-3fae-4476-9ec3-db95f636ac09-operator-scripts\") pod \"nova-cell1-51f6-account-create-update-qdrbh\" (UID: \"9bc6f6e3-3fae-4476-9ec3-db95f636ac09\") " pod="openstack/nova-cell1-51f6-account-create-update-qdrbh" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.401079 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fbv8\" (UniqueName: \"kubernetes.io/projected/9bc6f6e3-3fae-4476-9ec3-db95f636ac09-kube-api-access-4fbv8\") pod \"nova-cell1-51f6-account-create-update-qdrbh\" (UID: \"9bc6f6e3-3fae-4476-9ec3-db95f636ac09\") " pod="openstack/nova-cell1-51f6-account-create-update-qdrbh" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.401113 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73d5db70-d787-4d5b-9c2c-64859f2acf0c-operator-scripts\") pod \"nova-cell1-db-create-czcg6\" (UID: \"73d5db70-d787-4d5b-9c2c-64859f2acf0c\") " pod="openstack/nova-cell1-db-create-czcg6" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.401215 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85334287-4d7c-428a-bf1d-20f5511f442a-operator-scripts\") pod \"nova-cell0-733a-account-create-update-x8f26\" (UID: \"85334287-4d7c-428a-bf1d-20f5511f442a\") " pod="openstack/nova-cell0-733a-account-create-update-x8f26" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.404748 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1306cbc-c01f-4393-93b4-bd40a4165953-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b1306cbc-c01f-4393-93b4-bd40a4165953" (UID: "b1306cbc-c01f-4393-93b4-bd40a4165953"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.405107 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85334287-4d7c-428a-bf1d-20f5511f442a-operator-scripts\") pod \"nova-cell0-733a-account-create-update-x8f26\" (UID: \"85334287-4d7c-428a-bf1d-20f5511f442a\") " pod="openstack/nova-cell0-733a-account-create-update-x8f26" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.405388 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1306cbc-c01f-4393-93b4-bd40a4165953-logs" (OuterVolumeSpecName: "logs") pod "b1306cbc-c01f-4393-93b4-bd40a4165953" (UID: "b1306cbc-c01f-4393-93b4-bd40a4165953"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.407874 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73d5db70-d787-4d5b-9c2c-64859f2acf0c-operator-scripts\") pod \"nova-cell1-db-create-czcg6\" (UID: \"73d5db70-d787-4d5b-9c2c-64859f2acf0c\") " pod="openstack/nova-cell1-db-create-czcg6" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.412301 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1306cbc-c01f-4393-93b4-bd40a4165953-kube-api-access-94hkv" (OuterVolumeSpecName: "kube-api-access-94hkv") pod "b1306cbc-c01f-4393-93b4-bd40a4165953" (UID: "b1306cbc-c01f-4393-93b4-bd40a4165953"). InnerVolumeSpecName "kube-api-access-94hkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.412601 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-scripts" (OuterVolumeSpecName: "scripts") pod "b1306cbc-c01f-4393-93b4-bd40a4165953" (UID: "b1306cbc-c01f-4393-93b4-bd40a4165953"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.422687 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b1306cbc-c01f-4393-93b4-bd40a4165953" (UID: "b1306cbc-c01f-4393-93b4-bd40a4165953"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.433997 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djkf2\" (UniqueName: \"kubernetes.io/projected/73d5db70-d787-4d5b-9c2c-64859f2acf0c-kube-api-access-djkf2\") pod \"nova-cell1-db-create-czcg6\" (UID: \"73d5db70-d787-4d5b-9c2c-64859f2acf0c\") " pod="openstack/nova-cell1-db-create-czcg6" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.445893 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnk9f\" (UniqueName: \"kubernetes.io/projected/85334287-4d7c-428a-bf1d-20f5511f442a-kube-api-access-nnk9f\") pod \"nova-cell0-733a-account-create-update-x8f26\" (UID: \"85334287-4d7c-428a-bf1d-20f5511f442a\") " pod="openstack/nova-cell0-733a-account-create-update-x8f26" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.466000 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-czcg6"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.469690 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1306cbc-c01f-4393-93b4-bd40a4165953" (UID: "b1306cbc-c01f-4393-93b4-bd40a4165953"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.489651 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6vfxh"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.505362 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc6f6e3-3fae-4476-9ec3-db95f636ac09-operator-scripts\") pod \"nova-cell1-51f6-account-create-update-qdrbh\" (UID: \"9bc6f6e3-3fae-4476-9ec3-db95f636ac09\") " pod="openstack/nova-cell1-51f6-account-create-update-qdrbh"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.505402 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fbv8\" (UniqueName: \"kubernetes.io/projected/9bc6f6e3-3fae-4476-9ec3-db95f636ac09-kube-api-access-4fbv8\") pod \"nova-cell1-51f6-account-create-update-qdrbh\" (UID: \"9bc6f6e3-3fae-4476-9ec3-db95f636ac09\") " pod="openstack/nova-cell1-51f6-account-create-update-qdrbh"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.505941 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1306cbc-c01f-4393-93b4-bd40a4165953-logs\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.505958 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.505966 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.505976 4981 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b1306cbc-c01f-4393-93b4-bd40a4165953-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.505984 4981 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.505992 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94hkv\" (UniqueName: \"kubernetes.io/projected/b1306cbc-c01f-4393-93b4-bd40a4165953-kube-api-access-94hkv\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.514612 4981 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-41b8-account-create-update-s8rrk"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.515730 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc6f6e3-3fae-4476-9ec3-db95f636ac09-operator-scripts\") pod \"nova-cell1-51f6-account-create-update-qdrbh\" (UID: \"9bc6f6e3-3fae-4476-9ec3-db95f636ac09\") " pod="openstack/nova-cell1-51f6-account-create-update-qdrbh"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.525788 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-733a-account-create-update-x8f26"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.535737 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fbv8\" (UniqueName: \"kubernetes.io/projected/9bc6f6e3-3fae-4476-9ec3-db95f636ac09-kube-api-access-4fbv8\") pod \"nova-cell1-51f6-account-create-update-qdrbh\" (UID: \"9bc6f6e3-3fae-4476-9ec3-db95f636ac09\") " pod="openstack/nova-cell1-51f6-account-create-update-qdrbh"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.555487 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-51f6-account-create-update-qdrbh"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.562574 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-config-data" (OuterVolumeSpecName: "config-data") pod "b1306cbc-c01f-4393-93b4-bd40a4165953" (UID: "b1306cbc-c01f-4393-93b4-bd40a4165953"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.587872 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.614732 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1306cbc-c01f-4393-93b4-bd40a4165953-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.623530 4981 scope.go:117] "RemoveContainer" containerID="48b428b6ab349f99b264a96307829f8ee6ebe9ba7f57da4b08a3d47cc28ec97c"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.666262 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.712334 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.720905 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.722616 4981 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.729013 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.730640 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.730795 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.796261 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.823828 4981 scope.go:117] "RemoveContainer" containerID="7c09d3911bc5a23879cc39d42d793f491217a92eceb38bc0ed94eabd6249b2a2"
Jan 28 15:23:40 crc kubenswrapper[4981]: E0128 15:23:40.824751 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c09d3911bc5a23879cc39d42d793f491217a92eceb38bc0ed94eabd6249b2a2\": container with ID starting with 7c09d3911bc5a23879cc39d42d793f491217a92eceb38bc0ed94eabd6249b2a2 not found: ID does not exist" containerID="7c09d3911bc5a23879cc39d42d793f491217a92eceb38bc0ed94eabd6249b2a2"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.824869 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c09d3911bc5a23879cc39d42d793f491217a92eceb38bc0ed94eabd6249b2a2"} err="failed to get container status \"7c09d3911bc5a23879cc39d42d793f491217a92eceb38bc0ed94eabd6249b2a2\": rpc error: code = NotFound desc = could not find container \"7c09d3911bc5a23879cc39d42d793f491217a92eceb38bc0ed94eabd6249b2a2\": container with ID starting with 7c09d3911bc5a23879cc39d42d793f491217a92eceb38bc0ed94eabd6249b2a2 not found: ID does not exist"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.824959 4981 scope.go:117] "RemoveContainer" containerID="48b428b6ab349f99b264a96307829f8ee6ebe9ba7f57da4b08a3d47cc28ec97c"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.825075 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/934e0f8e-1579-4d0e-a34a-53d266c4612a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.825141 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/934e0f8e-1579-4d0e-a34a-53d266c4612a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.825168 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/934e0f8e-1579-4d0e-a34a-53d266c4612a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.825229 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/934e0f8e-1579-4d0e-a34a-53d266c4612a-config-data-custom\") pod
\"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.825261 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/934e0f8e-1579-4d0e-a34a-53d266c4612a-scripts\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.825387 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934e0f8e-1579-4d0e-a34a-53d266c4612a-config-data\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.825470 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934e0f8e-1579-4d0e-a34a-53d266c4612a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.825713 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wtp4\" (UniqueName: \"kubernetes.io/projected/934e0f8e-1579-4d0e-a34a-53d266c4612a-kube-api-access-4wtp4\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.825749 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/934e0f8e-1579-4d0e-a34a-53d266c4612a-logs\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0" Jan 28 15:23:40 crc kubenswrapper[4981]: E0128 15:23:40.825877 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48b428b6ab349f99b264a96307829f8ee6ebe9ba7f57da4b08a3d47cc28ec97c\": container with ID starting with 48b428b6ab349f99b264a96307829f8ee6ebe9ba7f57da4b08a3d47cc28ec97c not found: ID does not exist" containerID="48b428b6ab349f99b264a96307829f8ee6ebe9ba7f57da4b08a3d47cc28ec97c" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.826022 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b428b6ab349f99b264a96307829f8ee6ebe9ba7f57da4b08a3d47cc28ec97c"} err="failed to get container status \"48b428b6ab349f99b264a96307829f8ee6ebe9ba7f57da4b08a3d47cc28ec97c\": rpc error: code = NotFound desc = could not find container \"48b428b6ab349f99b264a96307829f8ee6ebe9ba7f57da4b08a3d47cc28ec97c\": container with ID starting with 48b428b6ab349f99b264a96307829f8ee6ebe9ba7f57da4b08a3d47cc28ec97c not found: ID does not exist" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.826049 4981 scope.go:117] "RemoveContainer" containerID="7c09d3911bc5a23879cc39d42d793f491217a92eceb38bc0ed94eabd6249b2a2" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.829628 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c09d3911bc5a23879cc39d42d793f491217a92eceb38bc0ed94eabd6249b2a2"} err="failed to get container status \"7c09d3911bc5a23879cc39d42d793f491217a92eceb38bc0ed94eabd6249b2a2\": rpc error: code = NotFound desc 
= could not find container \"7c09d3911bc5a23879cc39d42d793f491217a92eceb38bc0ed94eabd6249b2a2\": container with ID starting with 7c09d3911bc5a23879cc39d42d793f491217a92eceb38bc0ed94eabd6249b2a2 not found: ID does not exist"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.829679 4981 scope.go:117] "RemoveContainer" containerID="48b428b6ab349f99b264a96307829f8ee6ebe9ba7f57da4b08a3d47cc28ec97c"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.830615 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b428b6ab349f99b264a96307829f8ee6ebe9ba7f57da4b08a3d47cc28ec97c"} err="failed to get container status \"48b428b6ab349f99b264a96307829f8ee6ebe9ba7f57da4b08a3d47cc28ec97c\": rpc error: code = NotFound desc = could not find container \"48b428b6ab349f99b264a96307829f8ee6ebe9ba7f57da4b08a3d47cc28ec97c\": container with ID starting with 48b428b6ab349f99b264a96307829f8ee6ebe9ba7f57da4b08a3d47cc28ec97c not found: ID does not exist"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.930094 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/934e0f8e-1579-4d0e-a34a-53d266c4612a-scripts\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.930671 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934e0f8e-1579-4d0e-a34a-53d266c4612a-config-data\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.930727 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934e0f8e-1579-4d0e-a34a-53d266c4612a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.930782 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wtp4\" (UniqueName: \"kubernetes.io/projected/934e0f8e-1579-4d0e-a34a-53d266c4612a-kube-api-access-4wtp4\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.930806 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/934e0f8e-1579-4d0e-a34a-53d266c4612a-logs\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.930910 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/934e0f8e-1579-4d0e-a34a-53d266c4612a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.930947 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/934e0f8e-1579-4d0e-a34a-53d266c4612a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.930969 4981
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/934e0f8e-1579-4d0e-a34a-53d266c4612a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.930989 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/934e0f8e-1579-4d0e-a34a-53d266c4612a-config-data-custom\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.935135 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/934e0f8e-1579-4d0e-a34a-53d266c4612a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.935513 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/934e0f8e-1579-4d0e-a34a-53d266c4612a-logs\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.945282 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/934e0f8e-1579-4d0e-a34a-53d266c4612a-scripts\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.946073 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/934e0f8e-1579-4d0e-a34a-53d266c4612a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.946805 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934e0f8e-1579-4d0e-a34a-53d266c4612a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.954913 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/934e0f8e-1579-4d0e-a34a-53d266c4612a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.960436 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934e0f8e-1579-4d0e-a34a-53d266c4612a-config-data\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.971042 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wtp4\" (UniqueName: \"kubernetes.io/projected/934e0f8e-1579-4d0e-a34a-53d266c4612a-kube-api-access-4wtp4\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0"
Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.987504 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/934e0f8e-1579-4d0e-a34a-53d266c4612a-config-data-custom\") pod \"cinder-api-0\" (UID: \"934e0f8e-1579-4d0e-a34a-53d266c4612a\") " pod="openstack/cinder-api-0" Jan 28 15:23:40 crc kubenswrapper[4981]: I0128 15:23:40.992036 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-lkbq5"] Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.052355 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.103168 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.133923 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-dns-svc\") pod \"30426464-1d4b-4ac9-86c1-7d4e458000ba\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.133993 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-ovsdbserver-sb\") pod \"30426464-1d4b-4ac9-86c1-7d4e458000ba\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.134045 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-dns-swift-storage-0\") pod \"30426464-1d4b-4ac9-86c1-7d4e458000ba\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.134067 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kfpg\" (UniqueName: \"kubernetes.io/projected/30426464-1d4b-4ac9-86c1-7d4e458000ba-kube-api-access-9kfpg\") pod \"30426464-1d4b-4ac9-86c1-7d4e458000ba\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.134148 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-config\") pod \"30426464-1d4b-4ac9-86c1-7d4e458000ba\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.134176 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-ovsdbserver-nb\") pod \"30426464-1d4b-4ac9-86c1-7d4e458000ba\" (UID: \"30426464-1d4b-4ac9-86c1-7d4e458000ba\") " Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.140594 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30426464-1d4b-4ac9-86c1-7d4e458000ba-kube-api-access-9kfpg" (OuterVolumeSpecName: "kube-api-access-9kfpg") pod "30426464-1d4b-4ac9-86c1-7d4e458000ba" (UID: "30426464-1d4b-4ac9-86c1-7d4e458000ba"). InnerVolumeSpecName "kube-api-access-9kfpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.237622 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kfpg\" (UniqueName: \"kubernetes.io/projected/30426464-1d4b-4ac9-86c1-7d4e458000ba-kube-api-access-9kfpg\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.247632 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "30426464-1d4b-4ac9-86c1-7d4e458000ba" (UID: "30426464-1d4b-4ac9-86c1-7d4e458000ba"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.263633 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "30426464-1d4b-4ac9-86c1-7d4e458000ba" (UID: "30426464-1d4b-4ac9-86c1-7d4e458000ba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.268673 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-config" (OuterVolumeSpecName: "config") pod "30426464-1d4b-4ac9-86c1-7d4e458000ba" (UID: "30426464-1d4b-4ac9-86c1-7d4e458000ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.274521 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-czcg6"] Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.290589 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "30426464-1d4b-4ac9-86c1-7d4e458000ba" (UID: "30426464-1d4b-4ac9-86c1-7d4e458000ba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.331786 4981 generic.go:334] "Generic (PLEG): container finished" podID="30426464-1d4b-4ac9-86c1-7d4e458000ba" containerID="d68239935d044773a1dd1ede1a7e92cb9900e54235df046fc9895068c8d81865" exitCode=0 Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.331944 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.350665 4981 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.350712 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.350723 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.350735 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.373941 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "30426464-1d4b-4ac9-86c1-7d4e458000ba" (UID: "30426464-1d4b-4ac9-86c1-7d4e458000ba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.389094 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-67cc6fc44d-7stvd" podStartSLOduration=3.38906538 podStartE2EDuration="3.38906538s" podCreationTimestamp="2026-01-28 15:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:41.371848809 +0000 UTC m=+1232.824007070" watchObservedRunningTime="2026-01-28 15:23:41.38906538 +0000 UTC m=+1232.841223631" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.404353 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1306cbc-c01f-4393-93b4-bd40a4165953" path="/var/lib/kubelet/pods/b1306cbc-c01f-4393-93b4-bd40a4165953/volumes" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.409471 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.409577 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cc6fc44d-7stvd" event={"ID":"bd5d9602-d2bd-4dfd-9249-41f61260b5eb","Type":"ContainerStarted","Data":"2e39bb5187129b17753782b936e62c27167ad87bbde6468e0fa65e1fa4917249"} Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.409646 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" event={"ID":"331416fc-f914-4ed7-8326-ff3db72c5246","Type":"ContainerStarted","Data":"dfd0a6027c0c8493ead9b4bc432c8ecb5b485f48ea78962607f507e39418f231"} Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.409703 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.409721 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" 
event={"ID":"30426464-1d4b-4ac9-86c1-7d4e458000ba","Type":"ContainerDied","Data":"d68239935d044773a1dd1ede1a7e92cb9900e54235df046fc9895068c8d81865"} Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.409785 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-733a-account-create-update-x8f26"] Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.410227 4981 scope.go:117] "RemoveContainer" containerID="d68239935d044773a1dd1ede1a7e92cb9900e54235df046fc9895068c8d81865" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.410502 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69c986f6d7-n4p6d" event={"ID":"30426464-1d4b-4ac9-86c1-7d4e458000ba","Type":"ContainerDied","Data":"dc75d6b66c1047511183f1191e758489ee1cc8c7695f81e97b0b1def93750068"} Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.410626 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.410661 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.410675 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lkbq5" event={"ID":"6d4a209b-c995-42fa-9a1c-82a1c9c60e91","Type":"ContainerStarted","Data":"352dd1106f74b7d2ecb0a634d76ef54ec4b2ff4ebe877c58fd80692384d75af2"} Jan 28 15:23:41 crc kubenswrapper[4981]: W0128 15:23:41.415420 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85334287_4d7c_428a_bf1d_20f5511f442a.slice/crio-2e2bdd12c629b6b7d82434a3b64e7adb0781c0bd852fce798025d458e1ad9705 WatchSource:0}: Error finding container 2e2bdd12c629b6b7d82434a3b64e7adb0781c0bd852fce798025d458e1ad9705: Status 404 returned error can't find the container with id 2e2bdd12c629b6b7d82434a3b64e7adb0781c0bd852fce798025d458e1ad9705 Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.416780 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" podStartSLOduration=3.416764624 podStartE2EDuration="3.416764624s" podCreationTimestamp="2026-01-28 15:23:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:41.416719933 +0000 UTC m=+1232.868878164" watchObservedRunningTime="2026-01-28 15:23:41.416764624 +0000 UTC m=+1232.868922865" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.453984 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.455297 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/30426464-1d4b-4ac9-86c1-7d4e458000ba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.481000 4981 scope.go:117] "RemoveContainer" containerID="8f27aaa24d3c338124a0f9d903db2ce708e43235cc727f3a4f6124bd25e82b26" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.534033 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.548240 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-api-41b8-account-create-update-s8rrk"] Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.582246 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6vfxh"] Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.585101 4981 scope.go:117] "RemoveContainer" containerID="d68239935d044773a1dd1ede1a7e92cb9900e54235df046fc9895068c8d81865" Jan 28 15:23:41 crc kubenswrapper[4981]: E0128 15:23:41.586943 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d68239935d044773a1dd1ede1a7e92cb9900e54235df046fc9895068c8d81865\": container with ID starting with d68239935d044773a1dd1ede1a7e92cb9900e54235df046fc9895068c8d81865 not found: ID does not exist" containerID="d68239935d044773a1dd1ede1a7e92cb9900e54235df046fc9895068c8d81865" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.586997 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d68239935d044773a1dd1ede1a7e92cb9900e54235df046fc9895068c8d81865"} err="failed to get container status \"d68239935d044773a1dd1ede1a7e92cb9900e54235df046fc9895068c8d81865\": rpc error: code = NotFound desc = could not find container \"d68239935d044773a1dd1ede1a7e92cb9900e54235df046fc9895068c8d81865\": container with ID starting with d68239935d044773a1dd1ede1a7e92cb9900e54235df046fc9895068c8d81865 not found: ID does not exist" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.587018 4981 scope.go:117] "RemoveContainer" containerID="8f27aaa24d3c338124a0f9d903db2ce708e43235cc727f3a4f6124bd25e82b26" Jan 28 15:23:41 crc kubenswrapper[4981]: E0128 15:23:41.587605 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f27aaa24d3c338124a0f9d903db2ce708e43235cc727f3a4f6124bd25e82b26\": container with ID starting with 8f27aaa24d3c338124a0f9d903db2ce708e43235cc727f3a4f6124bd25e82b26 not found: ID does not exist" containerID="8f27aaa24d3c338124a0f9d903db2ce708e43235cc727f3a4f6124bd25e82b26" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.587623 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f27aaa24d3c338124a0f9d903db2ce708e43235cc727f3a4f6124bd25e82b26"} err="failed to get container status \"8f27aaa24d3c338124a0f9d903db2ce708e43235cc727f3a4f6124bd25e82b26\": rpc error: code = NotFound desc = could not find container \"8f27aaa24d3c338124a0f9d903db2ce708e43235cc727f3a4f6124bd25e82b26\": container with ID starting with 8f27aaa24d3c338124a0f9d903db2ce708e43235cc727f3a4f6124bd25e82b26 not found: ID does not exist" Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.597082 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-51f6-account-create-update-qdrbh"] Jan 28 15:23:41 crc kubenswrapper[4981]: W0128 15:23:41.598892 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a51d19b_f2d1_4164_bb39_6da466eb115c.slice/crio-8837b208c1aed7bc3278da32da45f86cd5fc342990e9dd78782c2214c6eef48f WatchSource:0}: Error finding container 8837b208c1aed7bc3278da32da45f86cd5fc342990e9dd78782c2214c6eef48f: Status 404 returned error can't find the container with id 8837b208c1aed7bc3278da32da45f86cd5fc342990e9dd78782c2214c6eef48f Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.679245 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-69c986f6d7-n4p6d"] Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.726523 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69c986f6d7-n4p6d"] Jan 28 15:23:41 crc kubenswrapper[4981]: I0128 15:23:41.876866 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 15:23:41 crc kubenswrapper[4981]: W0128 15:23:41.952890 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod934e0f8e_1579_4d0e_a34a_53d266c4612a.slice/crio-e42b4c1319adc470add59221e3c722f0e025c1c386ca3954b7c926d42035ba80 WatchSource:0}: Error finding container e42b4c1319adc470add59221e3c722f0e025c1c386ca3954b7c926d42035ba80: Status 404 returned error can't find the container with id e42b4c1319adc470add59221e3c722f0e025c1c386ca3954b7c926d42035ba80 Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.160139 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-787f455647-ngpww"] Jan 28 15:23:42 crc kubenswrapper[4981]: E0128 15:23:42.161978 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30426464-1d4b-4ac9-86c1-7d4e458000ba" containerName="init" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.162001 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="30426464-1d4b-4ac9-86c1-7d4e458000ba" containerName="init" Jan 28 15:23:42 crc kubenswrapper[4981]: E0128 15:23:42.162031 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30426464-1d4b-4ac9-86c1-7d4e458000ba" containerName="dnsmasq-dns" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.162038 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="30426464-1d4b-4ac9-86c1-7d4e458000ba" containerName="dnsmasq-dns" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.162239 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="30426464-1d4b-4ac9-86c1-7d4e458000ba" containerName="dnsmasq-dns" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.169520 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.173841 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-787f455647-ngpww"] Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.175020 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.175200 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.285575 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b73f2a77-d6ea-418e-93a0-9d5a928637eb-config\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.286037 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73f2a77-d6ea-418e-93a0-9d5a928637eb-combined-ca-bundle\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.286092 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b73f2a77-d6ea-418e-93a0-9d5a928637eb-ovndb-tls-certs\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.286114 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b73f2a77-d6ea-418e-93a0-9d5a928637eb-internal-tls-certs\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.286134 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b73f2a77-d6ea-418e-93a0-9d5a928637eb-httpd-config\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.286239 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzflk\" (UniqueName: \"kubernetes.io/projected/b73f2a77-d6ea-418e-93a0-9d5a928637eb-kube-api-access-jzflk\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.286277 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b73f2a77-d6ea-418e-93a0-9d5a928637eb-public-tls-certs\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.388624 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzflk\" (UniqueName: 
\"kubernetes.io/projected/b73f2a77-d6ea-418e-93a0-9d5a928637eb-kube-api-access-jzflk\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.388741 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b73f2a77-d6ea-418e-93a0-9d5a928637eb-public-tls-certs\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.388808 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b73f2a77-d6ea-418e-93a0-9d5a928637eb-config\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.388880 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73f2a77-d6ea-418e-93a0-9d5a928637eb-combined-ca-bundle\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.388976 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b73f2a77-d6ea-418e-93a0-9d5a928637eb-ovndb-tls-certs\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.389005 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b73f2a77-d6ea-418e-93a0-9d5a928637eb-internal-tls-certs\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.389064 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b73f2a77-d6ea-418e-93a0-9d5a928637eb-httpd-config\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.400482 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b73f2a77-d6ea-418e-93a0-9d5a928637eb-internal-tls-certs\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.404849 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b73f2a77-d6ea-418e-93a0-9d5a928637eb-combined-ca-bundle\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.408963 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b73f2a77-d6ea-418e-93a0-9d5a928637eb-config\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " 
pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.411594 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b73f2a77-d6ea-418e-93a0-9d5a928637eb-httpd-config\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.417840 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b73f2a77-d6ea-418e-93a0-9d5a928637eb-public-tls-certs\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.424431 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzflk\" (UniqueName: \"kubernetes.io/projected/b73f2a77-d6ea-418e-93a0-9d5a928637eb-kube-api-access-jzflk\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.431374 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b73f2a77-d6ea-418e-93a0-9d5a928637eb-ovndb-tls-certs\") pod \"neutron-787f455647-ngpww\" (UID: \"b73f2a77-d6ea-418e-93a0-9d5a928637eb\") " pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.443181 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6vfxh" event={"ID":"1994b16c-0ff3-4534-be1a-fcc718dd6eed","Type":"ContainerStarted","Data":"602e647e355840485038d99df186edb1cd36a2b8fc2933d61b3ea82c234665df"} Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.443237 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6vfxh" event={"ID":"1994b16c-0ff3-4534-be1a-fcc718dd6eed","Type":"ContainerStarted","Data":"2b0d6389c245f87657711321f3572f4922e978990f650038c03f5482dab30425"} Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.467558 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-51f6-account-create-update-qdrbh" event={"ID":"9bc6f6e3-3fae-4476-9ec3-db95f636ac09","Type":"ContainerStarted","Data":"4bdb329cede3dd3c6dfb1944d29cf6db0d8eb4bed1234c78148c65c225a843fc"} Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.467609 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-51f6-account-create-update-qdrbh" event={"ID":"9bc6f6e3-3fae-4476-9ec3-db95f636ac09","Type":"ContainerStarted","Data":"49c69ec7592828f69d9a9283f00b8f10c6560054c9c065a3ea1b58f260e7a241"} Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.485965 4981 generic.go:334] "Generic (PLEG): container finished" podID="73d5db70-d787-4d5b-9c2c-64859f2acf0c" containerID="f577551ae3f69b50a3f1f480a541a3283df0b2c22a53555560b2e5509f4d53ce" exitCode=0 Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.486070 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-czcg6" event={"ID":"73d5db70-d787-4d5b-9c2c-64859f2acf0c","Type":"ContainerDied","Data":"f577551ae3f69b50a3f1f480a541a3283df0b2c22a53555560b2e5509f4d53ce"} Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.486100 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-czcg6" 
event={"ID":"73d5db70-d787-4d5b-9c2c-64859f2acf0c","Type":"ContainerStarted","Data":"ec312deade57a2616cea0230c81ee52a4fef791bd3eab5ba0dda85239b476e17"} Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.498385 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-6vfxh" podStartSLOduration=3.498363412 podStartE2EDuration="3.498363412s" podCreationTimestamp="2026-01-28 15:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:42.465121153 +0000 UTC m=+1233.917279384" watchObservedRunningTime="2026-01-28 15:23:42.498363412 +0000 UTC m=+1233.950521653" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.503615 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-787f455647-ngpww" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.509069 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"934e0f8e-1579-4d0e-a34a-53d266c4612a","Type":"ContainerStarted","Data":"e42b4c1319adc470add59221e3c722f0e025c1c386ca3954b7c926d42035ba80"} Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.526495 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-733a-account-create-update-x8f26" event={"ID":"85334287-4d7c-428a-bf1d-20f5511f442a","Type":"ContainerStarted","Data":"83ba1d5df8289aca43300f7d9127640f14033dff6e141234cf72734923d078db"} Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.526584 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-733a-account-create-update-x8f26" event={"ID":"85334287-4d7c-428a-bf1d-20f5511f442a","Type":"ContainerStarted","Data":"2e2bdd12c629b6b7d82434a3b64e7adb0781c0bd852fce798025d458e1ad9705"} Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.531026 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lkbq5" event={"ID":"6d4a209b-c995-42fa-9a1c-82a1c9c60e91","Type":"ContainerStarted","Data":"7a8015ad5efc7040ba91b0cc760c39bcbbc6210c43204ce4f08717c582641cd3"} Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.560514 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-51f6-account-create-update-qdrbh" podStartSLOduration=2.560496747 podStartE2EDuration="2.560496747s" podCreationTimestamp="2026-01-28 15:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:42.521500567 +0000 UTC m=+1233.973658808" watchObservedRunningTime="2026-01-28 15:23:42.560496747 +0000 UTC m=+1234.012654988" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.565271 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-41b8-account-create-update-s8rrk" event={"ID":"2a51d19b-f2d1-4164-bb39-6da466eb115c","Type":"ContainerStarted","Data":"c8270963b0feaf7876dd1797cd835f1a9ffafa2375b5a36457463111738cf5be"} Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.566073 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-41b8-account-create-update-s8rrk" event={"ID":"2a51d19b-f2d1-4164-bb39-6da466eb115c","Type":"ContainerStarted","Data":"8837b208c1aed7bc3278da32da45f86cd5fc342990e9dd78782c2214c6eef48f"} Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.573349 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-external-api-0" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.573526 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.619291 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-733a-account-create-update-x8f26" podStartSLOduration=2.619272554 podStartE2EDuration="2.619272554s" podCreationTimestamp="2026-01-28 15:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:42.593382687 +0000 UTC m=+1234.045540938" watchObservedRunningTime="2026-01-28 15:23:42.619272554 +0000 UTC m=+1234.071430795" Jan 28 15:23:42 crc kubenswrapper[4981]: I0128 15:23:42.627619 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-41b8-account-create-update-s8rrk" podStartSLOduration=3.627541991 podStartE2EDuration="3.627541991s" podCreationTimestamp="2026-01-28 15:23:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:42.624178333 +0000 UTC m=+1234.076336574" watchObservedRunningTime="2026-01-28 15:23:42.627541991 +0000 UTC m=+1234.079700232" Jan 28 15:23:43 crc kubenswrapper[4981]: I0128 15:23:43.125583 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-787f455647-ngpww"] Jan 28 15:23:43 crc kubenswrapper[4981]: W0128 15:23:43.160723 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb73f2a77_d6ea_418e_93a0_9d5a928637eb.slice/crio-648da65e68579b6f8b52c7d612122f2ddaf61ab1b618a249073f39758e99b680 WatchSource:0}: Error finding container 648da65e68579b6f8b52c7d612122f2ddaf61ab1b618a249073f39758e99b680: Status 404 returned error can't find the container with id 648da65e68579b6f8b52c7d612122f2ddaf61ab1b618a249073f39758e99b680 Jan 28 15:23:43 crc kubenswrapper[4981]: I0128 15:23:43.331697 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30426464-1d4b-4ac9-86c1-7d4e458000ba" path="/var/lib/kubelet/pods/30426464-1d4b-4ac9-86c1-7d4e458000ba/volumes" Jan 28 15:23:43 crc kubenswrapper[4981]: I0128 15:23:43.579681 4981 generic.go:334] "Generic (PLEG): container finished" podID="2a51d19b-f2d1-4164-bb39-6da466eb115c" containerID="c8270963b0feaf7876dd1797cd835f1a9ffafa2375b5a36457463111738cf5be" exitCode=0 Jan 28 15:23:43 crc kubenswrapper[4981]: I0128 15:23:43.579794 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-41b8-account-create-update-s8rrk" event={"ID":"2a51d19b-f2d1-4164-bb39-6da466eb115c","Type":"ContainerDied","Data":"c8270963b0feaf7876dd1797cd835f1a9ffafa2375b5a36457463111738cf5be"} Jan 28 15:23:43 crc kubenswrapper[4981]: I0128 15:23:43.585651 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-787f455647-ngpww" event={"ID":"b73f2a77-d6ea-418e-93a0-9d5a928637eb","Type":"ContainerStarted","Data":"fccad157c4a352f369a5ae19faf0f9df09451ed008010335b83d2a6ee0935f8e"} Jan 28 15:23:43 crc kubenswrapper[4981]: I0128 15:23:43.585682 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-787f455647-ngpww" event={"ID":"b73f2a77-d6ea-418e-93a0-9d5a928637eb","Type":"ContainerStarted","Data":"648da65e68579b6f8b52c7d612122f2ddaf61ab1b618a249073f39758e99b680"} Jan 28 
15:23:43 crc kubenswrapper[4981]: I0128 15:23:43.587242 4981 generic.go:334] "Generic (PLEG): container finished" podID="1994b16c-0ff3-4534-be1a-fcc718dd6eed" containerID="602e647e355840485038d99df186edb1cd36a2b8fc2933d61b3ea82c234665df" exitCode=0 Jan 28 15:23:43 crc kubenswrapper[4981]: I0128 15:23:43.587296 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6vfxh" event={"ID":"1994b16c-0ff3-4534-be1a-fcc718dd6eed","Type":"ContainerDied","Data":"602e647e355840485038d99df186edb1cd36a2b8fc2933d61b3ea82c234665df"} Jan 28 15:23:43 crc kubenswrapper[4981]: I0128 15:23:43.592422 4981 generic.go:334] "Generic (PLEG): container finished" podID="9bc6f6e3-3fae-4476-9ec3-db95f636ac09" containerID="4bdb329cede3dd3c6dfb1944d29cf6db0d8eb4bed1234c78148c65c225a843fc" exitCode=0 Jan 28 15:23:43 crc kubenswrapper[4981]: I0128 15:23:43.592570 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-51f6-account-create-update-qdrbh" event={"ID":"9bc6f6e3-3fae-4476-9ec3-db95f636ac09","Type":"ContainerDied","Data":"4bdb329cede3dd3c6dfb1944d29cf6db0d8eb4bed1234c78148c65c225a843fc"} Jan 28 15:23:43 crc kubenswrapper[4981]: I0128 15:23:43.598016 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"934e0f8e-1579-4d0e-a34a-53d266c4612a","Type":"ContainerStarted","Data":"2c0fb2751ddbe219a9cb178e2792da16240491adb57d8d6b1530c709d0a785c2"} Jan 28 15:23:43 crc kubenswrapper[4981]: I0128 15:23:43.600861 4981 generic.go:334] "Generic (PLEG): container finished" podID="85334287-4d7c-428a-bf1d-20f5511f442a" containerID="83ba1d5df8289aca43300f7d9127640f14033dff6e141234cf72734923d078db" exitCode=0 Jan 28 15:23:43 crc kubenswrapper[4981]: I0128 15:23:43.600924 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-733a-account-create-update-x8f26" event={"ID":"85334287-4d7c-428a-bf1d-20f5511f442a","Type":"ContainerDied","Data":"83ba1d5df8289aca43300f7d9127640f14033dff6e141234cf72734923d078db"} Jan 28 15:23:43 crc kubenswrapper[4981]: I0128 15:23:43.606572 4981 generic.go:334] "Generic (PLEG): container finished" podID="6d4a209b-c995-42fa-9a1c-82a1c9c60e91" containerID="7a8015ad5efc7040ba91b0cc760c39bcbbc6210c43204ce4f08717c582641cd3" exitCode=0 Jan 28 15:23:43 crc kubenswrapper[4981]: I0128 15:23:43.607597 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lkbq5" event={"ID":"6d4a209b-c995-42fa-9a1c-82a1c9c60e91","Type":"ContainerDied","Data":"7a8015ad5efc7040ba91b0cc760c39bcbbc6210c43204ce4f08717c582641cd3"} Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.184269 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-czcg6" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.191539 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-lkbq5" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.348639 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73d5db70-d787-4d5b-9c2c-64859f2acf0c-operator-scripts\") pod \"73d5db70-d787-4d5b-9c2c-64859f2acf0c\" (UID: \"73d5db70-d787-4d5b-9c2c-64859f2acf0c\") " Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.349145 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d8xm\" (UniqueName: \"kubernetes.io/projected/6d4a209b-c995-42fa-9a1c-82a1c9c60e91-kube-api-access-2d8xm\") pod \"6d4a209b-c995-42fa-9a1c-82a1c9c60e91\" (UID: \"6d4a209b-c995-42fa-9a1c-82a1c9c60e91\") " Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.349226 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4a209b-c995-42fa-9a1c-82a1c9c60e91-operator-scripts\") pod \"6d4a209b-c995-42fa-9a1c-82a1c9c60e91\" (UID: \"6d4a209b-c995-42fa-9a1c-82a1c9c60e91\") " Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.349294 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djkf2\" (UniqueName: \"kubernetes.io/projected/73d5db70-d787-4d5b-9c2c-64859f2acf0c-kube-api-access-djkf2\") pod \"73d5db70-d787-4d5b-9c2c-64859f2acf0c\" (UID: \"73d5db70-d787-4d5b-9c2c-64859f2acf0c\") " Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.349322 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73d5db70-d787-4d5b-9c2c-64859f2acf0c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "73d5db70-d787-4d5b-9c2c-64859f2acf0c" (UID: "73d5db70-d787-4d5b-9c2c-64859f2acf0c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.349638 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d4a209b-c995-42fa-9a1c-82a1c9c60e91-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d4a209b-c995-42fa-9a1c-82a1c9c60e91" (UID: "6d4a209b-c995-42fa-9a1c-82a1c9c60e91"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.350660 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/73d5db70-d787-4d5b-9c2c-64859f2acf0c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.350713 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d4a209b-c995-42fa-9a1c-82a1c9c60e91-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.355809 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73d5db70-d787-4d5b-9c2c-64859f2acf0c-kube-api-access-djkf2" (OuterVolumeSpecName: "kube-api-access-djkf2") pod "73d5db70-d787-4d5b-9c2c-64859f2acf0c" (UID: "73d5db70-d787-4d5b-9c2c-64859f2acf0c"). InnerVolumeSpecName "kube-api-access-djkf2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.356914 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d4a209b-c995-42fa-9a1c-82a1c9c60e91-kube-api-access-2d8xm" (OuterVolumeSpecName: "kube-api-access-2d8xm") pod "6d4a209b-c995-42fa-9a1c-82a1c9c60e91" (UID: "6d4a209b-c995-42fa-9a1c-82a1c9c60e91"). InnerVolumeSpecName "kube-api-access-2d8xm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.363976 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.364071 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.433015 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.445251 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.458744 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djkf2\" (UniqueName: \"kubernetes.io/projected/73d5db70-d787-4d5b-9c2c-64859f2acf0c-kube-api-access-djkf2\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.458776 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d8xm\" (UniqueName: \"kubernetes.io/projected/6d4a209b-c995-42fa-9a1c-82a1c9c60e91-kube-api-access-2d8xm\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.619121 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-czcg6" event={"ID":"73d5db70-d787-4d5b-9c2c-64859f2acf0c","Type":"ContainerDied","Data":"ec312deade57a2616cea0230c81ee52a4fef791bd3eab5ba0dda85239b476e17"} Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.619165 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec312deade57a2616cea0230c81ee52a4fef791bd3eab5ba0dda85239b476e17" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.619251 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-czcg6" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.622805 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"934e0f8e-1579-4d0e-a34a-53d266c4612a","Type":"ContainerStarted","Data":"75ac5911248b0e735fe856e173f3370ff39150d60c2abc036c234c1d3a060f95"} Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.624374 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.629695 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lkbq5" event={"ID":"6d4a209b-c995-42fa-9a1c-82a1c9c60e91","Type":"ContainerDied","Data":"352dd1106f74b7d2ecb0a634d76ef54ec4b2ff4ebe877c58fd80692384d75af2"} Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.629783 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="352dd1106f74b7d2ecb0a634d76ef54ec4b2ff4ebe877c58fd80692384d75af2" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.629711 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lkbq5" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.636146 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-787f455647-ngpww" event={"ID":"b73f2a77-d6ea-418e-93a0-9d5a928637eb","Type":"ContainerStarted","Data":"472ffcb48de5cc28f5b39f553a44cd0bafac61f0e25d22897e64b4a93478a720"} Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.637052 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.637085 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.680426 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.680408793 podStartE2EDuration="4.680408793s" podCreationTimestamp="2026-01-28 15:23:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:44.656955239 +0000 UTC m=+1236.109113490" watchObservedRunningTime="2026-01-28 15:23:44.680408793 +0000 UTC m=+1236.132567044" Jan 28 15:23:44 crc kubenswrapper[4981]: I0128 15:23:44.690053 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-787f455647-ngpww" podStartSLOduration=2.690028884 podStartE2EDuration="2.690028884s" podCreationTimestamp="2026-01-28 15:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:44.67839538 +0000 UTC m=+1236.130553621" watchObservedRunningTime="2026-01-28 15:23:44.690028884 +0000 UTC m=+1236.142187125" Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.219879 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-51f6-account-create-update-qdrbh" Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.311173 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.311259 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.385861 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc6f6e3-3fae-4476-9ec3-db95f636ac09-operator-scripts\") pod \"9bc6f6e3-3fae-4476-9ec3-db95f636ac09\" (UID: \"9bc6f6e3-3fae-4476-9ec3-db95f636ac09\") " Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.385947 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fbv8\" (UniqueName: \"kubernetes.io/projected/9bc6f6e3-3fae-4476-9ec3-db95f636ac09-kube-api-access-4fbv8\") pod \"9bc6f6e3-3fae-4476-9ec3-db95f636ac09\" (UID: \"9bc6f6e3-3fae-4476-9ec3-db95f636ac09\") " Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.394323 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bc6f6e3-3fae-4476-9ec3-db95f636ac09-kube-api-access-4fbv8" (OuterVolumeSpecName: "kube-api-access-4fbv8") pod "9bc6f6e3-3fae-4476-9ec3-db95f636ac09" (UID: "9bc6f6e3-3fae-4476-9ec3-db95f636ac09"). InnerVolumeSpecName "kube-api-access-4fbv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.400244 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bc6f6e3-3fae-4476-9ec3-db95f636ac09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9bc6f6e3-3fae-4476-9ec3-db95f636ac09" (UID: "9bc6f6e3-3fae-4476-9ec3-db95f636ac09"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.493614 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9bc6f6e3-3fae-4476-9ec3-db95f636ac09-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.493667 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fbv8\" (UniqueName: \"kubernetes.io/projected/9bc6f6e3-3fae-4476-9ec3-db95f636ac09-kube-api-access-4fbv8\") on node \"crc\" DevicePath \"\"" Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.539056 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6vfxh" Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.553474 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-733a-account-create-update-x8f26" Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.571542 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-41b8-account-create-update-s8rrk" Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.594958 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnk9f\" (UniqueName: \"kubernetes.io/projected/85334287-4d7c-428a-bf1d-20f5511f442a-kube-api-access-nnk9f\") pod \"85334287-4d7c-428a-bf1d-20f5511f442a\" (UID: \"85334287-4d7c-428a-bf1d-20f5511f442a\") " Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.595042 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1994b16c-0ff3-4534-be1a-fcc718dd6eed-operator-scripts\") pod \"1994b16c-0ff3-4534-be1a-fcc718dd6eed\" (UID: \"1994b16c-0ff3-4534-be1a-fcc718dd6eed\") " Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.595062 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxkdf\" (UniqueName: \"kubernetes.io/projected/2a51d19b-f2d1-4164-bb39-6da466eb115c-kube-api-access-kxkdf\") pod \"2a51d19b-f2d1-4164-bb39-6da466eb115c\" (UID: \"2a51d19b-f2d1-4164-bb39-6da466eb115c\") " Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.595167 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlx24\" (UniqueName: \"kubernetes.io/projected/1994b16c-0ff3-4534-be1a-fcc718dd6eed-kube-api-access-xlx24\") pod \"1994b16c-0ff3-4534-be1a-fcc718dd6eed\" (UID: \"1994b16c-0ff3-4534-be1a-fcc718dd6eed\") " Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.595292 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85334287-4d7c-428a-bf1d-20f5511f442a-operator-scripts\") pod \"85334287-4d7c-428a-bf1d-20f5511f442a\" (UID: \"85334287-4d7c-428a-bf1d-20f5511f442a\") " Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.595343 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a51d19b-f2d1-4164-bb39-6da466eb115c-operator-scripts\") pod \"2a51d19b-f2d1-4164-bb39-6da466eb115c\" (UID: \"2a51d19b-f2d1-4164-bb39-6da466eb115c\") " Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.596024 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a51d19b-f2d1-4164-bb39-6da466eb115c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a51d19b-f2d1-4164-bb39-6da466eb115c" (UID: "2a51d19b-f2d1-4164-bb39-6da466eb115c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.600485 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85334287-4d7c-428a-bf1d-20f5511f442a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "85334287-4d7c-428a-bf1d-20f5511f442a" (UID: "85334287-4d7c-428a-bf1d-20f5511f442a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.600555 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a51d19b-f2d1-4164-bb39-6da466eb115c-kube-api-access-kxkdf" (OuterVolumeSpecName: "kube-api-access-kxkdf") pod "2a51d19b-f2d1-4164-bb39-6da466eb115c" (UID: "2a51d19b-f2d1-4164-bb39-6da466eb115c"). 
InnerVolumeSpecName "kube-api-access-kxkdf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.600754 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1994b16c-0ff3-4534-be1a-fcc718dd6eed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1994b16c-0ff3-4534-be1a-fcc718dd6eed" (UID: "1994b16c-0ff3-4534-be1a-fcc718dd6eed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.604431 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1994b16c-0ff3-4534-be1a-fcc718dd6eed-kube-api-access-xlx24" (OuterVolumeSpecName: "kube-api-access-xlx24") pod "1994b16c-0ff3-4534-be1a-fcc718dd6eed" (UID: "1994b16c-0ff3-4534-be1a-fcc718dd6eed"). InnerVolumeSpecName "kube-api-access-xlx24". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.604580 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85334287-4d7c-428a-bf1d-20f5511f442a-kube-api-access-nnk9f" (OuterVolumeSpecName: "kube-api-access-nnk9f") pod "85334287-4d7c-428a-bf1d-20f5511f442a" (UID: "85334287-4d7c-428a-bf1d-20f5511f442a"). InnerVolumeSpecName "kube-api-access-nnk9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.672128 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-41b8-account-create-update-s8rrk" event={"ID":"2a51d19b-f2d1-4164-bb39-6da466eb115c","Type":"ContainerDied","Data":"8837b208c1aed7bc3278da32da45f86cd5fc342990e9dd78782c2214c6eef48f"} Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.672166 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8837b208c1aed7bc3278da32da45f86cd5fc342990e9dd78782c2214c6eef48f" Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.672235 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-41b8-account-create-update-s8rrk" Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.677663 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6vfxh"
Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.677656 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6vfxh" event={"ID":"1994b16c-0ff3-4534-be1a-fcc718dd6eed","Type":"ContainerDied","Data":"2b0d6389c245f87657711321f3572f4922e978990f650038c03f5482dab30425"}
Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.677816 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b0d6389c245f87657711321f3572f4922e978990f650038c03f5482dab30425"
Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.680233 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-51f6-account-create-update-qdrbh" event={"ID":"9bc6f6e3-3fae-4476-9ec3-db95f636ac09","Type":"ContainerDied","Data":"49c69ec7592828f69d9a9283f00b8f10c6560054c9c065a3ea1b58f260e7a241"}
Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.680286 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49c69ec7592828f69d9a9283f00b8f10c6560054c9c065a3ea1b58f260e7a241"
Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.680373 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-51f6-account-create-update-qdrbh"
Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.693565 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-733a-account-create-update-x8f26" event={"ID":"85334287-4d7c-428a-bf1d-20f5511f442a","Type":"ContainerDied","Data":"2e2bdd12c629b6b7d82434a3b64e7adb0781c0bd852fce798025d458e1ad9705"}
Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.693889 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e2bdd12c629b6b7d82434a3b64e7adb0781c0bd852fce798025d458e1ad9705"
Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.694450 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-733a-account-create-update-x8f26"
Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.695464 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-787f455647-ngpww"
Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.698516 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/85334287-4d7c-428a-bf1d-20f5511f442a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.698741 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a51d19b-f2d1-4164-bb39-6da466eb115c-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.698785 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnk9f\" (UniqueName: \"kubernetes.io/projected/85334287-4d7c-428a-bf1d-20f5511f442a-kube-api-access-nnk9f\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.698801 4981 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1994b16c-0ff3-4534-be1a-fcc718dd6eed-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.698813 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxkdf\" (UniqueName: \"kubernetes.io/projected/2a51d19b-f2d1-4164-bb39-6da466eb115c-kube-api-access-kxkdf\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.698825 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlx24\" (UniqueName: \"kubernetes.io/projected/1994b16c-0ff3-4534-be1a-fcc718dd6eed-kube-api-access-xlx24\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:45 crc kubenswrapper[4981]: I0128 15:23:45.965032 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 28 15:23:46 crc kubenswrapper[4981]: I0128 15:23:46.038786 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 28 15:23:46 crc kubenswrapper[4981]: I0128 15:23:46.702543 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ae859ea5-0c04-4736-9e30-6bf1337dd21d" containerName="cinder-scheduler" containerID="cri-o://bd1ec6ad954c7a01efa3740296798499ea2fee68df00d0aa9ddd42177d6096b0" gracePeriod=30
Jan 28 15:23:46 crc kubenswrapper[4981]: I0128 15:23:46.703113 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="ae859ea5-0c04-4736-9e30-6bf1337dd21d" containerName="probe" containerID="cri-o://c2027115c3035a634ec281c625d1cc45c30403a0f167041e2fa1d88f9c6340c1" gracePeriod=30
Jan 28 15:23:47 crc kubenswrapper[4981]: I0128 15:23:47.158851 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:47 crc kubenswrapper[4981]: I0128 15:23:47.159156 4981 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 15:23:47 crc kubenswrapper[4981]: I0128 15:23:47.398725 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 28 15:23:47 crc kubenswrapper[4981]: I0128 15:23:47.711089 4981 generic.go:334] "Generic (PLEG): container finished" podID="ae859ea5-0c04-4736-9e30-6bf1337dd21d" containerID="c2027115c3035a634ec281c625d1cc45c30403a0f167041e2fa1d88f9c6340c1" exitCode=0
Jan 28 15:23:47 crc kubenswrapper[4981]: I0128 15:23:47.711172 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae859ea5-0c04-4736-9e30-6bf1337dd21d","Type":"ContainerDied","Data":"c2027115c3035a634ec281c625d1cc45c30403a0f167041e2fa1d88f9c6340c1"}
Jan 28 15:23:48 crc kubenswrapper[4981]: I0128 15:23:48.642339 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5784cf869f-z7d4r"
Jan 28 15:23:48 crc kubenswrapper[4981]: I0128 15:23:48.722491 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-vmsvl"]
Jan 28 15:23:48 crc kubenswrapper[4981]: I0128 15:23:48.722894 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" podUID="7f20dc76-f79f-4102-822b-a5e03bb18abc" containerName="dnsmasq-dns" containerID="cri-o://6aecf75b68f1269efd0269b8d52309e0d75c28e0a0f68d29b4e17746131be3d3" gracePeriod=10
Jan 28 15:23:48 crc kubenswrapper[4981]: E0128 15:23:48.875112 4981 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f20dc76_f79f_4102_822b_a5e03bb18abc.slice/crio-conmon-6aecf75b68f1269efd0269b8d52309e0d75c28e0a0f68d29b4e17746131be3d3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f20dc76_f79f_4102_822b_a5e03bb18abc.slice/crio-6aecf75b68f1269efd0269b8d52309e0d75c28e0a0f68d29b4e17746131be3d3.scope\": RecentStats: unable to find data in memory cache]"
Jan 28 15:23:48 crc kubenswrapper[4981]: I0128 15:23:48.879745 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" podUID="7f20dc76-f79f-4102-822b-a5e03bb18abc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: connect: connection refused"
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.310362 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl"
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.436794 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-dns-swift-storage-0\") pod \"7f20dc76-f79f-4102-822b-a5e03bb18abc\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") "
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.436845 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-ovsdbserver-nb\") pod \"7f20dc76-f79f-4102-822b-a5e03bb18abc\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") "
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.436889 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-config\") pod \"7f20dc76-f79f-4102-822b-a5e03bb18abc\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") "
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.436909 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-586hg\" (UniqueName: \"kubernetes.io/projected/7f20dc76-f79f-4102-822b-a5e03bb18abc-kube-api-access-586hg\") pod \"7f20dc76-f79f-4102-822b-a5e03bb18abc\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") "
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.436935 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-ovsdbserver-sb\") pod \"7f20dc76-f79f-4102-822b-a5e03bb18abc\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") "
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.437109 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-dns-svc\") pod \"7f20dc76-f79f-4102-822b-a5e03bb18abc\" (UID: \"7f20dc76-f79f-4102-822b-a5e03bb18abc\") "
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.450276 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f20dc76-f79f-4102-822b-a5e03bb18abc-kube-api-access-586hg" (OuterVolumeSpecName: "kube-api-access-586hg") pod "7f20dc76-f79f-4102-822b-a5e03bb18abc" (UID: "7f20dc76-f79f-4102-822b-a5e03bb18abc"). InnerVolumeSpecName "kube-api-access-586hg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.494822 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-config" (OuterVolumeSpecName: "config") pod "7f20dc76-f79f-4102-822b-a5e03bb18abc" (UID: "7f20dc76-f79f-4102-822b-a5e03bb18abc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.499344 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f20dc76-f79f-4102-822b-a5e03bb18abc" (UID: "7f20dc76-f79f-4102-822b-a5e03bb18abc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.514022 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7f20dc76-f79f-4102-822b-a5e03bb18abc" (UID: "7f20dc76-f79f-4102-822b-a5e03bb18abc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.521112 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f20dc76-f79f-4102-822b-a5e03bb18abc" (UID: "7f20dc76-f79f-4102-822b-a5e03bb18abc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.521698 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7f20dc76-f79f-4102-822b-a5e03bb18abc" (UID: "7f20dc76-f79f-4102-822b-a5e03bb18abc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.539671 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.539739 4981 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.539760 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.539773 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.539785 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-586hg\" (UniqueName: \"kubernetes.io/projected/7f20dc76-f79f-4102-822b-a5e03bb18abc-kube-api-access-586hg\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.539801 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7f20dc76-f79f-4102-822b-a5e03bb18abc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.739051 4981 generic.go:334] "Generic (PLEG): container finished" podID="7f20dc76-f79f-4102-822b-a5e03bb18abc" containerID="6aecf75b68f1269efd0269b8d52309e0d75c28e0a0f68d29b4e17746131be3d3" exitCode=0
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.739128 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl"
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.739181 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" event={"ID":"7f20dc76-f79f-4102-822b-a5e03bb18abc","Type":"ContainerDied","Data":"6aecf75b68f1269efd0269b8d52309e0d75c28e0a0f68d29b4e17746131be3d3"}
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.739525 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59d5ff467f-vmsvl" event={"ID":"7f20dc76-f79f-4102-822b-a5e03bb18abc","Type":"ContainerDied","Data":"ee7e52aa41bee27c44958bfd14ecf3b006a85eecfd23b0e2203cbff37345d4a6"}
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.739551 4981 scope.go:117] "RemoveContainer" containerID="6aecf75b68f1269efd0269b8d52309e0d75c28e0a0f68d29b4e17746131be3d3"
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.776409 4981 scope.go:117] "RemoveContainer" containerID="a20b15fa159003f0eb1c14167e208b7bee1105315355e1147bc1e70c9d7ccaa2"
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.829544 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-vmsvl"]
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.852362 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59d5ff467f-vmsvl"]
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.858500 4981 scope.go:117] "RemoveContainer" containerID="6aecf75b68f1269efd0269b8d52309e0d75c28e0a0f68d29b4e17746131be3d3"
Jan 28 15:23:49 crc kubenswrapper[4981]: E0128 15:23:49.859156 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aecf75b68f1269efd0269b8d52309e0d75c28e0a0f68d29b4e17746131be3d3\": container with ID starting with 6aecf75b68f1269efd0269b8d52309e0d75c28e0a0f68d29b4e17746131be3d3 not found: ID does not exist" containerID="6aecf75b68f1269efd0269b8d52309e0d75c28e0a0f68d29b4e17746131be3d3"
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.859305 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aecf75b68f1269efd0269b8d52309e0d75c28e0a0f68d29b4e17746131be3d3"} err="failed to get container status \"6aecf75b68f1269efd0269b8d52309e0d75c28e0a0f68d29b4e17746131be3d3\": rpc error: code = NotFound desc = could not find container \"6aecf75b68f1269efd0269b8d52309e0d75c28e0a0f68d29b4e17746131be3d3\": container with ID starting with 6aecf75b68f1269efd0269b8d52309e0d75c28e0a0f68d29b4e17746131be3d3 not found: ID does not exist"
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.859489 4981 scope.go:117] "RemoveContainer" containerID="a20b15fa159003f0eb1c14167e208b7bee1105315355e1147bc1e70c9d7ccaa2"
Jan 28 15:23:49 crc kubenswrapper[4981]: E0128 15:23:49.859979 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a20b15fa159003f0eb1c14167e208b7bee1105315355e1147bc1e70c9d7ccaa2\": container with ID starting with a20b15fa159003f0eb1c14167e208b7bee1105315355e1147bc1e70c9d7ccaa2 not found: ID does not exist" containerID="a20b15fa159003f0eb1c14167e208b7bee1105315355e1147bc1e70c9d7ccaa2"
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.860038 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a20b15fa159003f0eb1c14167e208b7bee1105315355e1147bc1e70c9d7ccaa2"} err="failed to get container status \"a20b15fa159003f0eb1c14167e208b7bee1105315355e1147bc1e70c9d7ccaa2\": rpc error: code = NotFound desc = could not find container \"a20b15fa159003f0eb1c14167e208b7bee1105315355e1147bc1e70c9d7ccaa2\": container with ID starting with a20b15fa159003f0eb1c14167e208b7bee1105315355e1147bc1e70c9d7ccaa2 not found: ID does not exist"
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.898766 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 15:23:49 crc kubenswrapper[4981]: I0128 15:23:49.898834 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.371377 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fpszk"]
Jan 28 15:23:50 crc kubenswrapper[4981]: E0128 15:23:50.371833 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f20dc76-f79f-4102-822b-a5e03bb18abc" containerName="dnsmasq-dns"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.371854 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f20dc76-f79f-4102-822b-a5e03bb18abc" containerName="dnsmasq-dns"
Jan 28 15:23:50 crc kubenswrapper[4981]: E0128 15:23:50.371863 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73d5db70-d787-4d5b-9c2c-64859f2acf0c" containerName="mariadb-database-create"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.371873 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="73d5db70-d787-4d5b-9c2c-64859f2acf0c" containerName="mariadb-database-create"
Jan 28 15:23:50 crc kubenswrapper[4981]: E0128 15:23:50.371889 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1994b16c-0ff3-4534-be1a-fcc718dd6eed" containerName="mariadb-database-create"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.371896 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="1994b16c-0ff3-4534-be1a-fcc718dd6eed" containerName="mariadb-database-create"
Jan 28 15:23:50 crc kubenswrapper[4981]: E0128 15:23:50.371906 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f20dc76-f79f-4102-822b-a5e03bb18abc" containerName="init"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.371912 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f20dc76-f79f-4102-822b-a5e03bb18abc" containerName="init"
Jan 28 15:23:50 crc kubenswrapper[4981]: E0128 15:23:50.371928 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bc6f6e3-3fae-4476-9ec3-db95f636ac09" containerName="mariadb-account-create-update"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.371933 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bc6f6e3-3fae-4476-9ec3-db95f636ac09" containerName="mariadb-account-create-update"
Jan 28 15:23:50 crc kubenswrapper[4981]: E0128 15:23:50.371944 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a51d19b-f2d1-4164-bb39-6da466eb115c" containerName="mariadb-account-create-update"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.371951 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a51d19b-f2d1-4164-bb39-6da466eb115c" containerName="mariadb-account-create-update"
Jan 28 15:23:50 crc kubenswrapper[4981]: E0128 15:23:50.371961 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85334287-4d7c-428a-bf1d-20f5511f442a" containerName="mariadb-account-create-update"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.371967 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="85334287-4d7c-428a-bf1d-20f5511f442a" containerName="mariadb-account-create-update"
Jan 28 15:23:50 crc kubenswrapper[4981]: E0128 15:23:50.371987 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d4a209b-c995-42fa-9a1c-82a1c9c60e91" containerName="mariadb-database-create"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.371992 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d4a209b-c995-42fa-9a1c-82a1c9c60e91" containerName="mariadb-database-create"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.372151 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="85334287-4d7c-428a-bf1d-20f5511f442a" containerName="mariadb-account-create-update"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.372165 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a51d19b-f2d1-4164-bb39-6da466eb115c" containerName="mariadb-account-create-update"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.372174 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f20dc76-f79f-4102-822b-a5e03bb18abc" containerName="dnsmasq-dns"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.372181 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="1994b16c-0ff3-4534-be1a-fcc718dd6eed" containerName="mariadb-database-create"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.372210 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="73d5db70-d787-4d5b-9c2c-64859f2acf0c" containerName="mariadb-database-create"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.372221 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bc6f6e3-3fae-4476-9ec3-db95f636ac09" containerName="mariadb-account-create-update"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.372231 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d4a209b-c995-42fa-9a1c-82a1c9c60e91" containerName="mariadb-database-create"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.372824 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fpszk"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.375646 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nnzdz"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.376928 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.377241 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.396504 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fpszk"]
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.559131 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7267f1cd-1602-4397-a6c4-668efb787be4-config-data\") pod \"nova-cell0-conductor-db-sync-fpszk\" (UID: \"7267f1cd-1602-4397-a6c4-668efb787be4\") " pod="openstack/nova-cell0-conductor-db-sync-fpszk"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.559491 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7267f1cd-1602-4397-a6c4-668efb787be4-scripts\") pod \"nova-cell0-conductor-db-sync-fpszk\" (UID: \"7267f1cd-1602-4397-a6c4-668efb787be4\") " pod="openstack/nova-cell0-conductor-db-sync-fpszk"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.559585 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq92t\" (UniqueName: \"kubernetes.io/projected/7267f1cd-1602-4397-a6c4-668efb787be4-kube-api-access-pq92t\") pod \"nova-cell0-conductor-db-sync-fpszk\" (UID: \"7267f1cd-1602-4397-a6c4-668efb787be4\") " pod="openstack/nova-cell0-conductor-db-sync-fpszk"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.559683 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7267f1cd-1602-4397-a6c4-668efb787be4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fpszk\" (UID: \"7267f1cd-1602-4397-a6c4-668efb787be4\") " pod="openstack/nova-cell0-conductor-db-sync-fpszk"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.661842 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7267f1cd-1602-4397-a6c4-668efb787be4-scripts\") pod \"nova-cell0-conductor-db-sync-fpszk\" (UID: \"7267f1cd-1602-4397-a6c4-668efb787be4\") " pod="openstack/nova-cell0-conductor-db-sync-fpszk"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.661884 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq92t\" (UniqueName: \"kubernetes.io/projected/7267f1cd-1602-4397-a6c4-668efb787be4-kube-api-access-pq92t\") pod \"nova-cell0-conductor-db-sync-fpszk\" (UID: \"7267f1cd-1602-4397-a6c4-668efb787be4\") " pod="openstack/nova-cell0-conductor-db-sync-fpszk"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.661947 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7267f1cd-1602-4397-a6c4-668efb787be4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fpszk\" (UID: \"7267f1cd-1602-4397-a6c4-668efb787be4\") " pod="openstack/nova-cell0-conductor-db-sync-fpszk"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.662022 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7267f1cd-1602-4397-a6c4-668efb787be4-config-data\") pod \"nova-cell0-conductor-db-sync-fpszk\" (UID: \"7267f1cd-1602-4397-a6c4-668efb787be4\") " pod="openstack/nova-cell0-conductor-db-sync-fpszk"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.667060 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7267f1cd-1602-4397-a6c4-668efb787be4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-fpszk\" (UID: \"7267f1cd-1602-4397-a6c4-668efb787be4\") " pod="openstack/nova-cell0-conductor-db-sync-fpszk"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.668655 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7267f1cd-1602-4397-a6c4-668efb787be4-scripts\") pod \"nova-cell0-conductor-db-sync-fpszk\" (UID: \"7267f1cd-1602-4397-a6c4-668efb787be4\") " pod="openstack/nova-cell0-conductor-db-sync-fpszk"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.679811 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq92t\" (UniqueName: \"kubernetes.io/projected/7267f1cd-1602-4397-a6c4-668efb787be4-kube-api-access-pq92t\") pod \"nova-cell0-conductor-db-sync-fpszk\" (UID: \"7267f1cd-1602-4397-a6c4-668efb787be4\") " pod="openstack/nova-cell0-conductor-db-sync-fpszk"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.679951 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7267f1cd-1602-4397-a6c4-668efb787be4-config-data\") pod \"nova-cell0-conductor-db-sync-fpszk\" (UID: \"7267f1cd-1602-4397-a6c4-668efb787be4\") " pod="openstack/nova-cell0-conductor-db-sync-fpszk"
Jan 28 15:23:50 crc kubenswrapper[4981]: I0128 15:23:50.687333 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fpszk"
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.155476 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fpszk"]
Jan 28 15:23:51 crc kubenswrapper[4981]: W0128 15:23:51.175112 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7267f1cd_1602_4397_a6c4_668efb787be4.slice/crio-ff7e516939881ed079a1d932882326b7c69ef1796bb2dcad9cad0b29b7cf1d93 WatchSource:0}: Error finding container ff7e516939881ed079a1d932882326b7c69ef1796bb2dcad9cad0b29b7cf1d93: Status 404 returned error can't find the container with id ff7e516939881ed079a1d932882326b7c69ef1796bb2dcad9cad0b29b7cf1d93
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.341426 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f20dc76-f79f-4102-822b-a5e03bb18abc" path="/var/lib/kubelet/pods/7f20dc76-f79f-4102-822b-a5e03bb18abc/volumes"
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.524908 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.681990 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-scripts\") pod \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") "
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.682142 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-config-data\") pod \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") "
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.682162 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae859ea5-0c04-4736-9e30-6bf1337dd21d-etc-machine-id\") pod \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") "
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.682219 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pz4p\" (UniqueName: \"kubernetes.io/projected/ae859ea5-0c04-4736-9e30-6bf1337dd21d-kube-api-access-2pz4p\") pod \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") "
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.682260 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-combined-ca-bundle\") pod \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") "
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.682308 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-config-data-custom\") pod \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\" (UID: \"ae859ea5-0c04-4736-9e30-6bf1337dd21d\") "
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.682762 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ae859ea5-0c04-4736-9e30-6bf1337dd21d-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ae859ea5-0c04-4736-9e30-6bf1337dd21d" (UID: "ae859ea5-0c04-4736-9e30-6bf1337dd21d"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.691519 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-scripts" (OuterVolumeSpecName: "scripts") pod "ae859ea5-0c04-4736-9e30-6bf1337dd21d" (UID: "ae859ea5-0c04-4736-9e30-6bf1337dd21d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.693322 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae859ea5-0c04-4736-9e30-6bf1337dd21d-kube-api-access-2pz4p" (OuterVolumeSpecName: "kube-api-access-2pz4p") pod "ae859ea5-0c04-4736-9e30-6bf1337dd21d" (UID: "ae859ea5-0c04-4736-9e30-6bf1337dd21d"). InnerVolumeSpecName "kube-api-access-2pz4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.702463 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ae859ea5-0c04-4736-9e30-6bf1337dd21d" (UID: "ae859ea5-0c04-4736-9e30-6bf1337dd21d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.758823 4981 generic.go:334] "Generic (PLEG): container finished" podID="ae859ea5-0c04-4736-9e30-6bf1337dd21d" containerID="bd1ec6ad954c7a01efa3740296798499ea2fee68df00d0aa9ddd42177d6096b0" exitCode=0
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.758879 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae859ea5-0c04-4736-9e30-6bf1337dd21d","Type":"ContainerDied","Data":"bd1ec6ad954c7a01efa3740296798499ea2fee68df00d0aa9ddd42177d6096b0"}
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.758907 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"ae859ea5-0c04-4736-9e30-6bf1337dd21d","Type":"ContainerDied","Data":"e3cc00b0b3586fcf86777d2aaf8d1fb4d9273b296a929e29c0f8c7dd47284bbe"}
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.758928 4981 scope.go:117] "RemoveContainer" containerID="c2027115c3035a634ec281c625d1cc45c30403a0f167041e2fa1d88f9c6340c1"
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.759052 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.761365 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fpszk" event={"ID":"7267f1cd-1602-4397-a6c4-668efb787be4","Type":"ContainerStarted","Data":"ff7e516939881ed079a1d932882326b7c69ef1796bb2dcad9cad0b29b7cf1d93"}
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.768726 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae859ea5-0c04-4736-9e30-6bf1337dd21d" (UID: "ae859ea5-0c04-4736-9e30-6bf1337dd21d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.784386 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pz4p\" (UniqueName: \"kubernetes.io/projected/ae859ea5-0c04-4736-9e30-6bf1337dd21d-kube-api-access-2pz4p\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.784422 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.784435 4981 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.784446 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.784460 4981 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ae859ea5-0c04-4736-9e30-6bf1337dd21d-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.792795 4981 scope.go:117] "RemoveContainer" containerID="bd1ec6ad954c7a01efa3740296798499ea2fee68df00d0aa9ddd42177d6096b0"
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.808239 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-config-data" (OuterVolumeSpecName: "config-data") pod "ae859ea5-0c04-4736-9e30-6bf1337dd21d" (UID: "ae859ea5-0c04-4736-9e30-6bf1337dd21d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.819135 4981 scope.go:117] "RemoveContainer" containerID="c2027115c3035a634ec281c625d1cc45c30403a0f167041e2fa1d88f9c6340c1"
Jan 28 15:23:51 crc kubenswrapper[4981]: E0128 15:23:51.819615 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2027115c3035a634ec281c625d1cc45c30403a0f167041e2fa1d88f9c6340c1\": container with ID starting with c2027115c3035a634ec281c625d1cc45c30403a0f167041e2fa1d88f9c6340c1 not found: ID does not exist" containerID="c2027115c3035a634ec281c625d1cc45c30403a0f167041e2fa1d88f9c6340c1"
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.819664 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2027115c3035a634ec281c625d1cc45c30403a0f167041e2fa1d88f9c6340c1"} err="failed to get container status \"c2027115c3035a634ec281c625d1cc45c30403a0f167041e2fa1d88f9c6340c1\": rpc error: code = NotFound desc = could not find container \"c2027115c3035a634ec281c625d1cc45c30403a0f167041e2fa1d88f9c6340c1\": container with ID starting with c2027115c3035a634ec281c625d1cc45c30403a0f167041e2fa1d88f9c6340c1 not found: ID does not exist"
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.819691 4981 scope.go:117] "RemoveContainer" containerID="bd1ec6ad954c7a01efa3740296798499ea2fee68df00d0aa9ddd42177d6096b0"
Jan 28 15:23:51 crc kubenswrapper[4981]: E0128 15:23:51.820051 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd1ec6ad954c7a01efa3740296798499ea2fee68df00d0aa9ddd42177d6096b0\": container with ID starting with bd1ec6ad954c7a01efa3740296798499ea2fee68df00d0aa9ddd42177d6096b0 not found: ID does not exist" containerID="bd1ec6ad954c7a01efa3740296798499ea2fee68df00d0aa9ddd42177d6096b0"
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.820102 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd1ec6ad954c7a01efa3740296798499ea2fee68df00d0aa9ddd42177d6096b0"} err="failed to get container status \"bd1ec6ad954c7a01efa3740296798499ea2fee68df00d0aa9ddd42177d6096b0\": rpc error: code = NotFound desc = could not find container \"bd1ec6ad954c7a01efa3740296798499ea2fee68df00d0aa9ddd42177d6096b0\": container with ID starting with bd1ec6ad954c7a01efa3740296798499ea2fee68df00d0aa9ddd42177d6096b0 not found: ID does not exist"
Jan 28 15:23:51 crc kubenswrapper[4981]: I0128 15:23:51.885857 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae859ea5-0c04-4736-9e30-6bf1337dd21d-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.067620 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.096203 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.103841 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.123507 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 28 15:23:52 crc kubenswrapper[4981]: E0128 15:23:52.123887 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae859ea5-0c04-4736-9e30-6bf1337dd21d" containerName="probe"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.123903 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae859ea5-0c04-4736-9e30-6bf1337dd21d" containerName="probe"
Jan 28 15:23:52 crc kubenswrapper[4981]: E0128 15:23:52.123913 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae859ea5-0c04-4736-9e30-6bf1337dd21d" containerName="cinder-scheduler"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.123919 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae859ea5-0c04-4736-9e30-6bf1337dd21d" containerName="cinder-scheduler"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.124112 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae859ea5-0c04-4736-9e30-6bf1337dd21d" containerName="cinder-scheduler"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.124132 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae859ea5-0c04-4736-9e30-6bf1337dd21d" containerName="probe"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.125103 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.166502 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.183802 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.295048 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8617985e-2166-4325-82d1-6004e7eff07d-scripts\") pod \"cinder-scheduler-0\" (UID: \"8617985e-2166-4325-82d1-6004e7eff07d\") " pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.295118 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8617985e-2166-4325-82d1-6004e7eff07d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8617985e-2166-4325-82d1-6004e7eff07d\") " pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.295206 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99vfq\" (UniqueName: \"kubernetes.io/projected/8617985e-2166-4325-82d1-6004e7eff07d-kube-api-access-99vfq\") pod \"cinder-scheduler-0\" (UID: \"8617985e-2166-4325-82d1-6004e7eff07d\") " pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.295234 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8617985e-2166-4325-82d1-6004e7eff07d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8617985e-2166-4325-82d1-6004e7eff07d\") " pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.295258 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8617985e-2166-4325-82d1-6004e7eff07d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8617985e-2166-4325-82d1-6004e7eff07d\") " pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.295458 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8617985e-2166-4325-82d1-6004e7eff07d-config-data\") pod \"cinder-scheduler-0\" (UID: \"8617985e-2166-4325-82d1-6004e7eff07d\") " pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.396842 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8617985e-2166-4325-82d1-6004e7eff07d-scripts\") pod \"cinder-scheduler-0\" (UID: \"8617985e-2166-4325-82d1-6004e7eff07d\") " pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.397171 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8617985e-2166-4325-82d1-6004e7eff07d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8617985e-2166-4325-82d1-6004e7eff07d\") " pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.397250 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99vfq\" (UniqueName: \"kubernetes.io/projected/8617985e-2166-4325-82d1-6004e7eff07d-kube-api-access-99vfq\") pod \"cinder-scheduler-0\" (UID: \"8617985e-2166-4325-82d1-6004e7eff07d\") " pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.397275 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8617985e-2166-4325-82d1-6004e7eff07d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8617985e-2166-4325-82d1-6004e7eff07d\") " pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.397296 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8617985e-2166-4325-82d1-6004e7eff07d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8617985e-2166-4325-82d1-6004e7eff07d\") " pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.397341 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8617985e-2166-4325-82d1-6004e7eff07d-config-data\") pod \"cinder-scheduler-0\" (UID: \"8617985e-2166-4325-82d1-6004e7eff07d\") " pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.397606 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8617985e-2166-4325-82d1-6004e7eff07d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8617985e-2166-4325-82d1-6004e7eff07d\") " pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.405142 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8617985e-2166-4325-82d1-6004e7eff07d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8617985e-2166-4325-82d1-6004e7eff07d\") " pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.405446 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8617985e-2166-4325-82d1-6004e7eff07d-config-data\") pod \"cinder-scheduler-0\" (UID: \"8617985e-2166-4325-82d1-6004e7eff07d\") " pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.405676 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8617985e-2166-4325-82d1-6004e7eff07d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8617985e-2166-4325-82d1-6004e7eff07d\") " pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.413038 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99vfq\" (UniqueName: \"kubernetes.io/projected/8617985e-2166-4325-82d1-6004e7eff07d-kube-api-access-99vfq\") pod \"cinder-scheduler-0\" (UID: \"8617985e-2166-4325-82d1-6004e7eff07d\") " pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.422712 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8617985e-2166-4325-82d1-6004e7eff07d-scripts\") pod \"cinder-scheduler-0\" (UID: \"8617985e-2166-4325-82d1-6004e7eff07d\") " pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.506877 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 28 15:23:52 crc kubenswrapper[4981]: I0128 15:23:52.951391 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 28 15:23:53 crc kubenswrapper[4981]: I0128 15:23:53.020349 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Jan 28 15:23:53 crc kubenswrapper[4981]: I0128 15:23:53.332976 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae859ea5-0c04-4736-9e30-6bf1337dd21d" path="/var/lib/kubelet/pods/ae859ea5-0c04-4736-9e30-6bf1337dd21d/volumes"
Jan 28 15:23:53 crc kubenswrapper[4981]: I0128 15:23:53.785905 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8617985e-2166-4325-82d1-6004e7eff07d","Type":"ContainerStarted","Data":"33cd4e130bbb5c2b5b1b9a4a2daf8091aa32c24c955bd757b368240d3c124748"}
Jan 28 15:23:53 crc kubenswrapper[4981]: I0128 15:23:53.786307 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8617985e-2166-4325-82d1-6004e7eff07d","Type":"ContainerStarted","Data":"a88f5873dd7a7585a68e83dc98ec1785645b7625cb4246f5fe33c1432450d8bb"}
Jan 28 15:23:53 crc kubenswrapper[4981]: I0128 15:23:53.819536 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-77d87cc6cd-znvvw"
Jan 28 15:23:53 crc kubenswrapper[4981]: I0128 15:23:53.830007 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-77d87cc6cd-znvvw"
Jan 28 15:23:54 crc kubenswrapper[4981]: I0128 15:23:54.797851 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8617985e-2166-4325-82d1-6004e7eff07d","Type":"ContainerStarted","Data":"066c8b30af9315b276624dfaff4162df4d6105b92c754ef777d0f9979831d072"}
Jan 28 15:23:54 crc kubenswrapper[4981]: I0128 15:23:54.818253 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.818236153 podStartE2EDuration="2.818236153s" podCreationTimestamp="2026-01-28 15:23:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:23:54.813020687 +0000 UTC m=+1246.265178928" watchObservedRunningTime="2026-01-28 15:23:54.818236153 +0000 UTC m=+1246.270394394"
Jan 28 15:23:57 crc kubenswrapper[4981]: I0128 15:23:57.507783 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 28 15:24:02 crc kubenswrapper[4981]: I0128 15:24:02.753998 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 28 15:24:03 crc kubenswrapper[4981]: I0128 15:24:03.881081 4981 generic.go:334] "Generic (PLEG): container finished" podID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerID="b2c594831f6d498d299cde1375c09c2d1c77a97e84244c84afdae6e480592713" exitCode=137
Jan 28 15:24:03 crc kubenswrapper[4981]: I0128 15:24:03.881166 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46","Type":"ContainerDied","Data":"b2c594831f6d498d299cde1375c09c2d1c77a97e84244c84afdae6e480592713"}
Jan 28 15:24:04 crc kubenswrapper[4981]: I0128 15:24:04.891826 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46","Type":"ContainerDied","Data":"3268877eb2f603f41a22fd9b1a4ce027936784ecb1dbcaddcaeb23d7985c2d39"}
Jan 28 15:24:04 crc kubenswrapper[4981]: I0128 15:24:04.892112 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3268877eb2f603f41a22fd9b1a4ce027936784ecb1dbcaddcaeb23d7985c2d39"
Jan 28 15:24:04 crc kubenswrapper[4981]: I0128 15:24:04.941415 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 15:24:04 crc kubenswrapper[4981]: I0128 15:24:04.975647 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2njx6\" (UniqueName: \"kubernetes.io/projected/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-kube-api-access-2njx6\") pod \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") "
Jan 28 15:24:04 crc kubenswrapper[4981]: I0128 15:24:04.975749 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-combined-ca-bundle\") pod \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") "
Jan 28 15:24:04 crc kubenswrapper[4981]: I0128 15:24:04.975817 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-sg-core-conf-yaml\") pod \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") "
Jan 28 15:24:04 crc kubenswrapper[4981]: I0128 15:24:04.975834 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-config-data\") pod \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") "
Jan 28 15:24:04 crc kubenswrapper[4981]: I0128 15:24:04.975853 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-run-httpd\") pod \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") "
Jan 28 15:24:04 crc kubenswrapper[4981]: I0128 15:24:04.975907 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-scripts\") pod \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") "
Jan 28 15:24:04 crc kubenswrapper[4981]: I0128 15:24:04.975978 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-log-httpd\") pod \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\" (UID: \"9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46\") "
Jan 28 15:24:04 crc kubenswrapper[4981]: I0128 15:24:04.976823 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" (UID: "9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:24:04 crc kubenswrapper[4981]: I0128 15:24:04.979686 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" (UID: "9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:24:04 crc kubenswrapper[4981]: I0128 15:24:04.984961 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-scripts" (OuterVolumeSpecName: "scripts") pod "9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" (UID: "9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:24:04 crc kubenswrapper[4981]: I0128 15:24:04.992255 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-kube-api-access-2njx6" (OuterVolumeSpecName: "kube-api-access-2njx6") pod "9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" (UID: "9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46"). InnerVolumeSpecName "kube-api-access-2njx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.019614 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" (UID: "9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.077780 4981 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.077818 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2njx6\" (UniqueName: \"kubernetes.io/projected/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-kube-api-access-2njx6\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.077834 4981 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.077849 4981 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.077861 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.086560 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" (UID: "9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.109758 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-config-data" (OuterVolumeSpecName: "config-data") pod "9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" (UID: "9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.179562 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.179612 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.901096 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.902722 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fpszk" event={"ID":"7267f1cd-1602-4397-a6c4-668efb787be4","Type":"ContainerStarted","Data":"e9e00dd48ca4452cd65fcfee7bebd1e3bd503c4ef5a635a2575aac1b82826d42"}
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.922839 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-fpszk" podStartSLOduration=2.394481942 podStartE2EDuration="15.922817788s" podCreationTimestamp="2026-01-28 15:23:50 +0000 UTC" firstStartedPulling="2026-01-28 15:23:51.178497728 +0000 UTC m=+1242.630655999" lastFinishedPulling="2026-01-28 15:24:04.706833604 +0000 UTC m=+1256.158991845" observedRunningTime="2026-01-28 15:24:05.916049481 +0000 UTC m=+1257.368207732" watchObservedRunningTime="2026-01-28 15:24:05.922817788 +0000 UTC m=+1257.374976049"
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.941541 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.950743 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.968844 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 28 15:24:05 crc kubenswrapper[4981]: E0128 15:24:05.969291 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerName="proxy-httpd"
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.969312 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerName="proxy-httpd"
Jan 28 15:24:05 crc kubenswrapper[4981]: E0128 15:24:05.969333 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerName="ceilometer-central-agent"
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.969342 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerName="ceilometer-central-agent"
Jan 28 15:24:05 crc kubenswrapper[4981]: E0128 15:24:05.969363 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerName="sg-core"
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.969370 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerName="sg-core"
Jan 28 15:24:05 crc kubenswrapper[4981]: E0128 15:24:05.969382 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerName="ceilometer-notification-agent"
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.969390 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerName="ceilometer-notification-agent"
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.969579 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerName="ceilometer-central-agent"
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.969606 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerName="proxy-httpd"
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.969620 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerName="ceilometer-notification-agent"
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.969631 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" containerName="sg-core"
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.971510 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.978180 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.978336 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 28 15:24:05 crc kubenswrapper[4981]: I0128 15:24:05.979316 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.095256 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-scripts\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0"
Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.095563 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-config-data\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0"
Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.095583 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2c8m\" (UniqueName: \"kubernetes.io/projected/fa78157e-7c85-490d-9b80-acd93eed635c-kube-api-access-l2c8m\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0"
Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.095620 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa78157e-7c85-490d-9b80-acd93eed635c-run-httpd\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0"
Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.095638 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa78157e-7c85-490d-9b80-acd93eed635c-log-httpd\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0"
Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.095660 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0"
Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.095949 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0"
Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.197245 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-scripts\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0"
Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.197345 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2c8m\" (UniqueName: \"kubernetes.io/projected/fa78157e-7c85-490d-9b80-acd93eed635c-kube-api-access-l2c8m\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0"
Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.197374 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-config-data\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0"
Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.197440 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa78157e-7c85-490d-9b80-acd93eed635c-run-httpd\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0"
Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.197481 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa78157e-7c85-490d-9b80-acd93eed635c-log-httpd\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0"
Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.197523 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0"
Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.197637 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0"
Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.198047 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa78157e-7c85-490d-9b80-acd93eed635c-run-httpd\") pod \"ceilometer-0\" (UID: 
\"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0" Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.198387 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa78157e-7c85-490d-9b80-acd93eed635c-log-httpd\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0" Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.203927 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0" Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.207085 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-config-data\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0" Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.210093 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0" Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.210108 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-scripts\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0" Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.213984 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2c8m\" (UniqueName: \"kubernetes.io/projected/fa78157e-7c85-490d-9b80-acd93eed635c-kube-api-access-l2c8m\") pod \"ceilometer-0\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " pod="openstack/ceilometer-0" Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.285115 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.784082 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:24:06 crc kubenswrapper[4981]: I0128 15:24:06.910485 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa78157e-7c85-490d-9b80-acd93eed635c","Type":"ContainerStarted","Data":"2b23248db98986df1b8f92ed267bb211820ddf79551d9187c93ea669476919ad"} Jan 28 15:24:07 crc kubenswrapper[4981]: I0128 15:24:07.328132 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46" path="/var/lib/kubelet/pods/9ae3fd27-af3a-4dc4-b377-e3fcc4ccca46/volumes" Jan 28 15:24:07 crc kubenswrapper[4981]: I0128 15:24:07.943959 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa78157e-7c85-490d-9b80-acd93eed635c","Type":"ContainerStarted","Data":"262a8eda16b99f0931155673a90a88ef0e919702cd5566791d4811e288c06f8b"} Jan 28 15:24:08 crc kubenswrapper[4981]: I0128 15:24:08.672569 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:24:08 crc kubenswrapper[4981]: I0128 15:24:08.953316 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa78157e-7c85-490d-9b80-acd93eed635c","Type":"ContainerStarted","Data":"2a090c5c3b7b6592ca5c03113922ec29b2423a47c9dcee7f31cee4ebd30be48d"} Jan 28 15:24:09 crc kubenswrapper[4981]: I0128 15:24:09.963870 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa78157e-7c85-490d-9b80-acd93eed635c","Type":"ContainerStarted","Data":"7ad852d862fc125de3c83034e6887d883ac9f6094da0bb58702ac8e616efbca9"} Jan 28 15:24:10 crc kubenswrapper[4981]: I0128 15:24:10.157490 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:24:11 crc kubenswrapper[4981]: I0128 15:24:11.983823 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa78157e-7c85-490d-9b80-acd93eed635c","Type":"ContainerStarted","Data":"e306725170488486f94936690c78be3715bc13be3480ccdd0b6a0e9692985ce2"} Jan 28 15:24:11 crc kubenswrapper[4981]: I0128 15:24:11.985395 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 15:24:11 crc kubenswrapper[4981]: I0128 15:24:11.984108 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa78157e-7c85-490d-9b80-acd93eed635c" containerName="proxy-httpd" containerID="cri-o://e306725170488486f94936690c78be3715bc13be3480ccdd0b6a0e9692985ce2" gracePeriod=30 Jan 28 15:24:11 crc kubenswrapper[4981]: I0128 15:24:11.984135 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa78157e-7c85-490d-9b80-acd93eed635c" containerName="ceilometer-notification-agent" containerID="cri-o://2a090c5c3b7b6592ca5c03113922ec29b2423a47c9dcee7f31cee4ebd30be48d" gracePeriod=30 Jan 28 15:24:11 crc kubenswrapper[4981]: I0128 15:24:11.984179 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa78157e-7c85-490d-9b80-acd93eed635c" containerName="sg-core" containerID="cri-o://7ad852d862fc125de3c83034e6887d883ac9f6094da0bb58702ac8e616efbca9" gracePeriod=30 Jan 28 15:24:11 crc kubenswrapper[4981]: I0128 15:24:11.984052 4981 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fa78157e-7c85-490d-9b80-acd93eed635c" containerName="ceilometer-central-agent" containerID="cri-o://262a8eda16b99f0931155673a90a88ef0e919702cd5566791d4811e288c06f8b" gracePeriod=30 Jan 28 15:24:12 crc kubenswrapper[4981]: I0128 15:24:12.015823 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.419853852 podStartE2EDuration="7.015803426s" podCreationTimestamp="2026-01-28 15:24:05 +0000 UTC" firstStartedPulling="2026-01-28 15:24:06.789431994 +0000 UTC m=+1258.241590235" lastFinishedPulling="2026-01-28 15:24:11.385381568 +0000 UTC m=+1262.837539809" observedRunningTime="2026-01-28 15:24:12.009290386 +0000 UTC m=+1263.461448627" watchObservedRunningTime="2026-01-28 15:24:12.015803426 +0000 UTC m=+1263.467961667" Jan 28 15:24:12 crc kubenswrapper[4981]: I0128 15:24:12.515242 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-787f455647-ngpww" Jan 28 15:24:12 crc kubenswrapper[4981]: I0128 15:24:12.572712 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67cc6fc44d-7stvd"] Jan 28 15:24:12 crc kubenswrapper[4981]: I0128 15:24:12.572949 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-67cc6fc44d-7stvd" podUID="bd5d9602-d2bd-4dfd-9249-41f61260b5eb" containerName="neutron-api" containerID="cri-o://813330d2834717ca327ca40dd470f63cab153b5cf018d494be56de42bc57af41" gracePeriod=30 Jan 28 15:24:12 crc kubenswrapper[4981]: I0128 15:24:12.573060 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-67cc6fc44d-7stvd" podUID="bd5d9602-d2bd-4dfd-9249-41f61260b5eb" containerName="neutron-httpd" containerID="cri-o://2e39bb5187129b17753782b936e62c27167ad87bbde6468e0fa65e1fa4917249" gracePeriod=30 Jan 28 15:24:12 crc kubenswrapper[4981]: I0128 15:24:12.995368 4981 generic.go:334] "Generic (PLEG): container finished" podID="fa78157e-7c85-490d-9b80-acd93eed635c" containerID="e306725170488486f94936690c78be3715bc13be3480ccdd0b6a0e9692985ce2" exitCode=0 Jan 28 15:24:12 crc kubenswrapper[4981]: I0128 15:24:12.995409 4981 generic.go:334] "Generic (PLEG): container finished" podID="fa78157e-7c85-490d-9b80-acd93eed635c" containerID="7ad852d862fc125de3c83034e6887d883ac9f6094da0bb58702ac8e616efbca9" exitCode=2 Jan 28 15:24:12 crc kubenswrapper[4981]: I0128 15:24:12.995419 4981 generic.go:334] "Generic (PLEG): container finished" podID="fa78157e-7c85-490d-9b80-acd93eed635c" containerID="2a090c5c3b7b6592ca5c03113922ec29b2423a47c9dcee7f31cee4ebd30be48d" exitCode=0 Jan 28 15:24:12 crc kubenswrapper[4981]: I0128 15:24:12.995464 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa78157e-7c85-490d-9b80-acd93eed635c","Type":"ContainerDied","Data":"e306725170488486f94936690c78be3715bc13be3480ccdd0b6a0e9692985ce2"} Jan 28 15:24:12 crc kubenswrapper[4981]: I0128 15:24:12.995498 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa78157e-7c85-490d-9b80-acd93eed635c","Type":"ContainerDied","Data":"7ad852d862fc125de3c83034e6887d883ac9f6094da0bb58702ac8e616efbca9"} Jan 28 15:24:12 crc kubenswrapper[4981]: I0128 15:24:12.995512 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fa78157e-7c85-490d-9b80-acd93eed635c","Type":"ContainerDied","Data":"2a090c5c3b7b6592ca5c03113922ec29b2423a47c9dcee7f31cee4ebd30be48d"} Jan 28 15:24:12 crc kubenswrapper[4981]: I0128 15:24:12.997970 4981 generic.go:334] "Generic (PLEG): container finished" podID="bd5d9602-d2bd-4dfd-9249-41f61260b5eb" containerID="2e39bb5187129b17753782b936e62c27167ad87bbde6468e0fa65e1fa4917249" exitCode=0 Jan 28 15:24:12 crc kubenswrapper[4981]: I0128 15:24:12.998000 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cc6fc44d-7stvd" event={"ID":"bd5d9602-d2bd-4dfd-9249-41f61260b5eb","Type":"ContainerDied","Data":"2e39bb5187129b17753782b936e62c27167ad87bbde6468e0fa65e1fa4917249"} Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.557579 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.625199 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa78157e-7c85-490d-9b80-acd93eed635c-log-httpd\") pod \"fa78157e-7c85-490d-9b80-acd93eed635c\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.625269 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-combined-ca-bundle\") pod \"fa78157e-7c85-490d-9b80-acd93eed635c\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.625370 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-scripts\") pod \"fa78157e-7c85-490d-9b80-acd93eed635c\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.625407 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-config-data\") pod \"fa78157e-7c85-490d-9b80-acd93eed635c\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.625471 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa78157e-7c85-490d-9b80-acd93eed635c-run-httpd\") pod \"fa78157e-7c85-490d-9b80-acd93eed635c\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.625492 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-sg-core-conf-yaml\") pod \"fa78157e-7c85-490d-9b80-acd93eed635c\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.625511 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2c8m\" (UniqueName: \"kubernetes.io/projected/fa78157e-7c85-490d-9b80-acd93eed635c-kube-api-access-l2c8m\") pod \"fa78157e-7c85-490d-9b80-acd93eed635c\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.625838 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa78157e-7c85-490d-9b80-acd93eed635c-log-httpd" 
(OuterVolumeSpecName: "log-httpd") pod "fa78157e-7c85-490d-9b80-acd93eed635c" (UID: "fa78157e-7c85-490d-9b80-acd93eed635c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.625877 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa78157e-7c85-490d-9b80-acd93eed635c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fa78157e-7c85-490d-9b80-acd93eed635c" (UID: "fa78157e-7c85-490d-9b80-acd93eed635c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.626232 4981 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa78157e-7c85-490d-9b80-acd93eed635c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.626252 4981 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fa78157e-7c85-490d-9b80-acd93eed635c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.631540 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa78157e-7c85-490d-9b80-acd93eed635c-kube-api-access-l2c8m" (OuterVolumeSpecName: "kube-api-access-l2c8m") pod "fa78157e-7c85-490d-9b80-acd93eed635c" (UID: "fa78157e-7c85-490d-9b80-acd93eed635c"). InnerVolumeSpecName "kube-api-access-l2c8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.633295 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-scripts" (OuterVolumeSpecName: "scripts") pod "fa78157e-7c85-490d-9b80-acd93eed635c" (UID: "fa78157e-7c85-490d-9b80-acd93eed635c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.665375 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fa78157e-7c85-490d-9b80-acd93eed635c" (UID: "fa78157e-7c85-490d-9b80-acd93eed635c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.727178 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-config-data" (OuterVolumeSpecName: "config-data") pod "fa78157e-7c85-490d-9b80-acd93eed635c" (UID: "fa78157e-7c85-490d-9b80-acd93eed635c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.727965 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-config-data\") pod \"fa78157e-7c85-490d-9b80-acd93eed635c\" (UID: \"fa78157e-7c85-490d-9b80-acd93eed635c\") " Jan 28 15:24:18 crc kubenswrapper[4981]: W0128 15:24:18.728097 4981 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/fa78157e-7c85-490d-9b80-acd93eed635c/volumes/kubernetes.io~secret/config-data Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.728115 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-config-data" (OuterVolumeSpecName: "config-data") pod "fa78157e-7c85-490d-9b80-acd93eed635c" (UID: "fa78157e-7c85-490d-9b80-acd93eed635c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.728603 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.728631 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.728645 4981 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.728659 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2c8m\" (UniqueName: \"kubernetes.io/projected/fa78157e-7c85-490d-9b80-acd93eed635c-kube-api-access-l2c8m\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.733711 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa78157e-7c85-490d-9b80-acd93eed635c" (UID: "fa78157e-7c85-490d-9b80-acd93eed635c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:18 crc kubenswrapper[4981]: I0128 15:24:18.830561 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa78157e-7c85-490d-9b80-acd93eed635c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.052240 4981 generic.go:334] "Generic (PLEG): container finished" podID="7267f1cd-1602-4397-a6c4-668efb787be4" containerID="e9e00dd48ca4452cd65fcfee7bebd1e3bd503c4ef5a635a2575aac1b82826d42" exitCode=0 Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.052334 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fpszk" event={"ID":"7267f1cd-1602-4397-a6c4-668efb787be4","Type":"ContainerDied","Data":"e9e00dd48ca4452cd65fcfee7bebd1e3bd503c4ef5a635a2575aac1b82826d42"} Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.055603 4981 generic.go:334] "Generic (PLEG): container finished" podID="fa78157e-7c85-490d-9b80-acd93eed635c" containerID="262a8eda16b99f0931155673a90a88ef0e919702cd5566791d4811e288c06f8b" exitCode=0 Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.055649 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa78157e-7c85-490d-9b80-acd93eed635c","Type":"ContainerDied","Data":"262a8eda16b99f0931155673a90a88ef0e919702cd5566791d4811e288c06f8b"} Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.055664 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.055681 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fa78157e-7c85-490d-9b80-acd93eed635c","Type":"ContainerDied","Data":"2b23248db98986df1b8f92ed267bb211820ddf79551d9187c93ea669476919ad"} Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.055703 4981 scope.go:117] "RemoveContainer" containerID="e306725170488486f94936690c78be3715bc13be3480ccdd0b6a0e9692985ce2" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.099277 4981 scope.go:117] "RemoveContainer" containerID="7ad852d862fc125de3c83034e6887d883ac9f6094da0bb58702ac8e616efbca9" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.103811 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.119922 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.128514 4981 scope.go:117] "RemoveContainer" containerID="2a090c5c3b7b6592ca5c03113922ec29b2423a47c9dcee7f31cee4ebd30be48d" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.129800 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:24:19 crc kubenswrapper[4981]: E0128 15:24:19.130138 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa78157e-7c85-490d-9b80-acd93eed635c" containerName="ceilometer-central-agent" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.130157 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa78157e-7c85-490d-9b80-acd93eed635c" containerName="ceilometer-central-agent" Jan 28 15:24:19 crc kubenswrapper[4981]: E0128 15:24:19.130210 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa78157e-7c85-490d-9b80-acd93eed635c" containerName="ceilometer-notification-agent" Jan 28 15:24:19 crc 
kubenswrapper[4981]: I0128 15:24:19.130218 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa78157e-7c85-490d-9b80-acd93eed635c" containerName="ceilometer-notification-agent" Jan 28 15:24:19 crc kubenswrapper[4981]: E0128 15:24:19.130230 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa78157e-7c85-490d-9b80-acd93eed635c" containerName="sg-core" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.130236 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa78157e-7c85-490d-9b80-acd93eed635c" containerName="sg-core" Jan 28 15:24:19 crc kubenswrapper[4981]: E0128 15:24:19.130243 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa78157e-7c85-490d-9b80-acd93eed635c" containerName="proxy-httpd" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.130248 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa78157e-7c85-490d-9b80-acd93eed635c" containerName="proxy-httpd" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.130404 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa78157e-7c85-490d-9b80-acd93eed635c" containerName="sg-core" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.130421 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa78157e-7c85-490d-9b80-acd93eed635c" containerName="proxy-httpd" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.130436 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa78157e-7c85-490d-9b80-acd93eed635c" containerName="ceilometer-central-agent" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.130444 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa78157e-7c85-490d-9b80-acd93eed635c" containerName="ceilometer-notification-agent" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.131929 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.143705 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.143786 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.168529 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.169959 4981 scope.go:117] "RemoveContainer" containerID="262a8eda16b99f0931155673a90a88ef0e919702cd5566791d4811e288c06f8b" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.206964 4981 scope.go:117] "RemoveContainer" containerID="e306725170488486f94936690c78be3715bc13be3480ccdd0b6a0e9692985ce2" Jan 28 15:24:19 crc kubenswrapper[4981]: E0128 15:24:19.207529 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e306725170488486f94936690c78be3715bc13be3480ccdd0b6a0e9692985ce2\": container with ID starting with e306725170488486f94936690c78be3715bc13be3480ccdd0b6a0e9692985ce2 not found: ID does not exist" containerID="e306725170488486f94936690c78be3715bc13be3480ccdd0b6a0e9692985ce2" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.207562 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e306725170488486f94936690c78be3715bc13be3480ccdd0b6a0e9692985ce2"} err="failed to get container status \"e306725170488486f94936690c78be3715bc13be3480ccdd0b6a0e9692985ce2\": rpc error: code = NotFound desc = could not find container \"e306725170488486f94936690c78be3715bc13be3480ccdd0b6a0e9692985ce2\": container with ID starting with e306725170488486f94936690c78be3715bc13be3480ccdd0b6a0e9692985ce2 not found: ID does not exist" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.207583 4981 scope.go:117] "RemoveContainer" containerID="7ad852d862fc125de3c83034e6887d883ac9f6094da0bb58702ac8e616efbca9" Jan 28 15:24:19 crc kubenswrapper[4981]: E0128 15:24:19.207951 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ad852d862fc125de3c83034e6887d883ac9f6094da0bb58702ac8e616efbca9\": container with ID starting with 7ad852d862fc125de3c83034e6887d883ac9f6094da0bb58702ac8e616efbca9 not found: ID does not exist" containerID="7ad852d862fc125de3c83034e6887d883ac9f6094da0bb58702ac8e616efbca9" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.207970 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ad852d862fc125de3c83034e6887d883ac9f6094da0bb58702ac8e616efbca9"} err="failed to get container status \"7ad852d862fc125de3c83034e6887d883ac9f6094da0bb58702ac8e616efbca9\": rpc error: code = NotFound desc = could not find container \"7ad852d862fc125de3c83034e6887d883ac9f6094da0bb58702ac8e616efbca9\": container with ID starting with 7ad852d862fc125de3c83034e6887d883ac9f6094da0bb58702ac8e616efbca9 not found: ID does not exist" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.207984 4981 scope.go:117] "RemoveContainer" containerID="2a090c5c3b7b6592ca5c03113922ec29b2423a47c9dcee7f31cee4ebd30be48d" Jan 28 15:24:19 crc kubenswrapper[4981]: E0128 15:24:19.208294 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2a090c5c3b7b6592ca5c03113922ec29b2423a47c9dcee7f31cee4ebd30be48d\": container with ID starting with 2a090c5c3b7b6592ca5c03113922ec29b2423a47c9dcee7f31cee4ebd30be48d not found: ID does not exist" containerID="2a090c5c3b7b6592ca5c03113922ec29b2423a47c9dcee7f31cee4ebd30be48d" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.208342 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a090c5c3b7b6592ca5c03113922ec29b2423a47c9dcee7f31cee4ebd30be48d"} err="failed to get container status \"2a090c5c3b7b6592ca5c03113922ec29b2423a47c9dcee7f31cee4ebd30be48d\": rpc error: code = NotFound desc = could not find container \"2a090c5c3b7b6592ca5c03113922ec29b2423a47c9dcee7f31cee4ebd30be48d\": container with ID starting with 2a090c5c3b7b6592ca5c03113922ec29b2423a47c9dcee7f31cee4ebd30be48d not found: ID does not exist" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.208371 4981 scope.go:117] "RemoveContainer" containerID="262a8eda16b99f0931155673a90a88ef0e919702cd5566791d4811e288c06f8b" Jan 28 15:24:19 crc kubenswrapper[4981]: E0128 15:24:19.208687 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"262a8eda16b99f0931155673a90a88ef0e919702cd5566791d4811e288c06f8b\": container with ID starting with 262a8eda16b99f0931155673a90a88ef0e919702cd5566791d4811e288c06f8b not found: ID does not exist" containerID="262a8eda16b99f0931155673a90a88ef0e919702cd5566791d4811e288c06f8b" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.208721 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"262a8eda16b99f0931155673a90a88ef0e919702cd5566791d4811e288c06f8b"} err="failed to get container status \"262a8eda16b99f0931155673a90a88ef0e919702cd5566791d4811e288c06f8b\": rpc error: code = NotFound desc = could not find container \"262a8eda16b99f0931155673a90a88ef0e919702cd5566791d4811e288c06f8b\": container with ID starting with 262a8eda16b99f0931155673a90a88ef0e919702cd5566791d4811e288c06f8b not found: ID does not exist" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.247050 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.247296 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd0daa00-17fb-46ad-90b5-afbc21f83db8-log-httpd\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.247350 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-scripts\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.247380 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9dsv\" (UniqueName: \"kubernetes.io/projected/cd0daa00-17fb-46ad-90b5-afbc21f83db8-kube-api-access-v9dsv\") pod \"ceilometer-0\" (UID: 
\"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.247460 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd0daa00-17fb-46ad-90b5-afbc21f83db8-run-httpd\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.247509 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-config-data\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.247541 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.351793 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.351993 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd0daa00-17fb-46ad-90b5-afbc21f83db8-log-httpd\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.352040 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-scripts\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.352080 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9dsv\" (UniqueName: \"kubernetes.io/projected/cd0daa00-17fb-46ad-90b5-afbc21f83db8-kube-api-access-v9dsv\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.352146 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd0daa00-17fb-46ad-90b5-afbc21f83db8-run-httpd\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.352212 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-config-data\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.352254 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.354342 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd0daa00-17fb-46ad-90b5-afbc21f83db8-log-httpd\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.354565 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd0daa00-17fb-46ad-90b5-afbc21f83db8-run-httpd\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.359300 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.363428 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-config-data\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.365104 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa78157e-7c85-490d-9b80-acd93eed635c" path="/var/lib/kubelet/pods/fa78157e-7c85-490d-9b80-acd93eed635c/volumes" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.371760 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9dsv\" (UniqueName: \"kubernetes.io/projected/cd0daa00-17fb-46ad-90b5-afbc21f83db8-kube-api-access-v9dsv\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.378004 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-scripts\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.389640 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.464924 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.747081 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.899092 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.899204 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.899300 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.900585 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"af8ca17674da28747e2478538c5afcfef139a1d418b13a8e190cf49cebcd62c0"} pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:24:19 crc kubenswrapper[4981]: I0128 15:24:19.900661 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" containerID="cri-o://af8ca17674da28747e2478538c5afcfef139a1d418b13a8e190cf49cebcd62c0" gracePeriod=600 Jan 28 15:24:20 crc kubenswrapper[4981]: I0128 15:24:20.069963 4981 generic.go:334] "Generic (PLEG): container finished" podID="67525d77-715e-4ec3-bdbb-6854657355c0" containerID="af8ca17674da28747e2478538c5afcfef139a1d418b13a8e190cf49cebcd62c0" exitCode=0 Jan 28 15:24:20 crc kubenswrapper[4981]: I0128 15:24:20.070055 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerDied","Data":"af8ca17674da28747e2478538c5afcfef139a1d418b13a8e190cf49cebcd62c0"} Jan 28 15:24:20 crc kubenswrapper[4981]: I0128 15:24:20.070302 4981 scope.go:117] "RemoveContainer" containerID="176dd31ff4b98ab75c0fb5c532e4cb21dde081ab7085a97e6c5485cd5bc31437" Jan 28 15:24:20 crc kubenswrapper[4981]: I0128 15:24:20.071738 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd0daa00-17fb-46ad-90b5-afbc21f83db8","Type":"ContainerStarted","Data":"ab885f46768e0c01cfa95b2203b953d649713e14e363a71160f8ad9004469f17"} Jan 28 15:24:20 crc kubenswrapper[4981]: I0128 15:24:20.494795 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fpszk" Jan 28 15:24:20 crc kubenswrapper[4981]: I0128 15:24:20.578161 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7267f1cd-1602-4397-a6c4-668efb787be4-scripts\") pod \"7267f1cd-1602-4397-a6c4-668efb787be4\" (UID: \"7267f1cd-1602-4397-a6c4-668efb787be4\") " Jan 28 15:24:20 crc kubenswrapper[4981]: I0128 15:24:20.578248 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7267f1cd-1602-4397-a6c4-668efb787be4-combined-ca-bundle\") pod \"7267f1cd-1602-4397-a6c4-668efb787be4\" (UID: \"7267f1cd-1602-4397-a6c4-668efb787be4\") " Jan 28 15:24:20 crc kubenswrapper[4981]: I0128 15:24:20.578354 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7267f1cd-1602-4397-a6c4-668efb787be4-config-data\") pod \"7267f1cd-1602-4397-a6c4-668efb787be4\" (UID: \"7267f1cd-1602-4397-a6c4-668efb787be4\") " Jan 28 15:24:20 crc kubenswrapper[4981]: I0128 15:24:20.578398 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq92t\" (UniqueName: \"kubernetes.io/projected/7267f1cd-1602-4397-a6c4-668efb787be4-kube-api-access-pq92t\") pod \"7267f1cd-1602-4397-a6c4-668efb787be4\" (UID: \"7267f1cd-1602-4397-a6c4-668efb787be4\") " Jan 28 15:24:20 crc kubenswrapper[4981]: I0128 15:24:20.583329 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7267f1cd-1602-4397-a6c4-668efb787be4-scripts" (OuterVolumeSpecName: "scripts") pod "7267f1cd-1602-4397-a6c4-668efb787be4" (UID: "7267f1cd-1602-4397-a6c4-668efb787be4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:20 crc kubenswrapper[4981]: I0128 15:24:20.598807 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7267f1cd-1602-4397-a6c4-668efb787be4-kube-api-access-pq92t" (OuterVolumeSpecName: "kube-api-access-pq92t") pod "7267f1cd-1602-4397-a6c4-668efb787be4" (UID: "7267f1cd-1602-4397-a6c4-668efb787be4"). InnerVolumeSpecName "kube-api-access-pq92t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:20 crc kubenswrapper[4981]: I0128 15:24:20.605232 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7267f1cd-1602-4397-a6c4-668efb787be4-config-data" (OuterVolumeSpecName: "config-data") pod "7267f1cd-1602-4397-a6c4-668efb787be4" (UID: "7267f1cd-1602-4397-a6c4-668efb787be4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:20 crc kubenswrapper[4981]: I0128 15:24:20.609058 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7267f1cd-1602-4397-a6c4-668efb787be4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7267f1cd-1602-4397-a6c4-668efb787be4" (UID: "7267f1cd-1602-4397-a6c4-668efb787be4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:20 crc kubenswrapper[4981]: I0128 15:24:20.680957 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq92t\" (UniqueName: \"kubernetes.io/projected/7267f1cd-1602-4397-a6c4-668efb787be4-kube-api-access-pq92t\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:20 crc kubenswrapper[4981]: I0128 15:24:20.681233 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7267f1cd-1602-4397-a6c4-668efb787be4-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:20 crc kubenswrapper[4981]: I0128 15:24:20.681248 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7267f1cd-1602-4397-a6c4-668efb787be4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:20 crc kubenswrapper[4981]: I0128 15:24:20.681262 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7267f1cd-1602-4397-a6c4-668efb787be4-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.084992 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerStarted","Data":"56ba4ad0b5731e840644f4808ebff65356aa66806974ca06b90bbbbf62b8740b"} Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.086689 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd0daa00-17fb-46ad-90b5-afbc21f83db8","Type":"ContainerStarted","Data":"23a4613a401863f9e1580bb5dd7a4e2a6aed8a293ee795f7021fd40f7d496876"} Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.088026 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-fpszk" event={"ID":"7267f1cd-1602-4397-a6c4-668efb787be4","Type":"ContainerDied","Data":"ff7e516939881ed079a1d932882326b7c69ef1796bb2dcad9cad0b29b7cf1d93"} Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.088058 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff7e516939881ed079a1d932882326b7c69ef1796bb2dcad9cad0b29b7cf1d93" Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.088120 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-fpszk" Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.223597 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 15:24:21 crc kubenswrapper[4981]: E0128 15:24:21.224032 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7267f1cd-1602-4397-a6c4-668efb787be4" containerName="nova-cell0-conductor-db-sync" Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.224054 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="7267f1cd-1602-4397-a6c4-668efb787be4" containerName="nova-cell0-conductor-db-sync" Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.224267 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="7267f1cd-1602-4397-a6c4-668efb787be4" containerName="nova-cell0-conductor-db-sync" Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.225003 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.233450 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.233624 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nnzdz" Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.234079 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.294585 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa8b018e-ddae-4682-b60b-ba6d0dfeff1f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"aa8b018e-ddae-4682-b60b-ba6d0dfeff1f\") " pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.294659 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrfwl\" (UniqueName: \"kubernetes.io/projected/aa8b018e-ddae-4682-b60b-ba6d0dfeff1f-kube-api-access-qrfwl\") pod \"nova-cell0-conductor-0\" (UID: \"aa8b018e-ddae-4682-b60b-ba6d0dfeff1f\") " pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.295094 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8b018e-ddae-4682-b60b-ba6d0dfeff1f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aa8b018e-ddae-4682-b60b-ba6d0dfeff1f\") " pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.397618 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa8b018e-ddae-4682-b60b-ba6d0dfeff1f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"aa8b018e-ddae-4682-b60b-ba6d0dfeff1f\") " pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.397698 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrfwl\" (UniqueName: \"kubernetes.io/projected/aa8b018e-ddae-4682-b60b-ba6d0dfeff1f-kube-api-access-qrfwl\") pod \"nova-cell0-conductor-0\" (UID: \"aa8b018e-ddae-4682-b60b-ba6d0dfeff1f\") " pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.397830 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8b018e-ddae-4682-b60b-ba6d0dfeff1f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"aa8b018e-ddae-4682-b60b-ba6d0dfeff1f\") " pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.403227 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa8b018e-ddae-4682-b60b-ba6d0dfeff1f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"aa8b018e-ddae-4682-b60b-ba6d0dfeff1f\") " pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.408734 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8b018e-ddae-4682-b60b-ba6d0dfeff1f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"aa8b018e-ddae-4682-b60b-ba6d0dfeff1f\") " pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.424885 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrfwl\" (UniqueName: \"kubernetes.io/projected/aa8b018e-ddae-4682-b60b-ba6d0dfeff1f-kube-api-access-qrfwl\") pod \"nova-cell0-conductor-0\" (UID: \"aa8b018e-ddae-4682-b60b-ba6d0dfeff1f\") " pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:21 crc kubenswrapper[4981]: I0128 15:24:21.579104 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:22 crc kubenswrapper[4981]: I0128 15:24:22.095160 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 15:24:22 crc kubenswrapper[4981]: W0128 15:24:22.100586 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa8b018e_ddae_4682_b60b_ba6d0dfeff1f.slice/crio-39cee783cc4f6de902a43bfb969ab8d3f617c72892e8a7944637b13b9237a068 WatchSource:0}: Error finding container 39cee783cc4f6de902a43bfb969ab8d3f617c72892e8a7944637b13b9237a068: Status 404 returned error can't find the container with id 39cee783cc4f6de902a43bfb969ab8d3f617c72892e8a7944637b13b9237a068 Jan 28 15:24:22 crc kubenswrapper[4981]: I0128 15:24:22.103361 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd0daa00-17fb-46ad-90b5-afbc21f83db8","Type":"ContainerStarted","Data":"31077dd12b8bd046f695576c19d91221acec9d3c3698b2924fff27a29be91fee"} Jan 28 15:24:22 crc kubenswrapper[4981]: I0128 15:24:22.103408 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd0daa00-17fb-46ad-90b5-afbc21f83db8","Type":"ContainerStarted","Data":"94dae92ad99a8d5f41a3a667f48adcff64798cb277fb148fd9b9b3afd9cecd5a"} Jan 28 15:24:23 crc kubenswrapper[4981]: I0128 15:24:23.112570 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aa8b018e-ddae-4682-b60b-ba6d0dfeff1f","Type":"ContainerStarted","Data":"c87294e667c998bee7c81f2f51795b7d1105fb7a2a54efeea1aba1f0b3406268"} Jan 28 15:24:23 crc kubenswrapper[4981]: I0128 15:24:23.112890 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aa8b018e-ddae-4682-b60b-ba6d0dfeff1f","Type":"ContainerStarted","Data":"39cee783cc4f6de902a43bfb969ab8d3f617c72892e8a7944637b13b9237a068"} Jan 28 15:24:23 crc kubenswrapper[4981]: I0128 15:24:23.113038 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:23 crc kubenswrapper[4981]: I0128 15:24:23.140596 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.140572299 podStartE2EDuration="2.140572299s" podCreationTimestamp="2026-01-28 15:24:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:23.125268128 +0000 UTC m=+1274.577426389" watchObservedRunningTime="2026-01-28 15:24:23.140572299 +0000 UTC m=+1274.592730540" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.035764 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.122766 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd0daa00-17fb-46ad-90b5-afbc21f83db8","Type":"ContainerStarted","Data":"b3e58c9e1c233ff448b50e2b0f5da9c7aab98ec37b015966174afaeb9bc75305"} Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.124269 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.129255 4981 generic.go:334] "Generic (PLEG): container finished" podID="bd5d9602-d2bd-4dfd-9249-41f61260b5eb" containerID="813330d2834717ca327ca40dd470f63cab153b5cf018d494be56de42bc57af41" exitCode=0 Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.130144 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-67cc6fc44d-7stvd" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.130385 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cc6fc44d-7stvd" event={"ID":"bd5d9602-d2bd-4dfd-9249-41f61260b5eb","Type":"ContainerDied","Data":"813330d2834717ca327ca40dd470f63cab153b5cf018d494be56de42bc57af41"} Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.130418 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-67cc6fc44d-7stvd" event={"ID":"bd5d9602-d2bd-4dfd-9249-41f61260b5eb","Type":"ContainerDied","Data":"45721bd76663858b588b6ff49c0b0b8a8a08d3adafb2ba6509ff6db0991c6ca4"} Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.130438 4981 scope.go:117] "RemoveContainer" containerID="2e39bb5187129b17753782b936e62c27167ad87bbde6468e0fa65e1fa4917249" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.157631 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.280430143 podStartE2EDuration="5.157611259s" podCreationTimestamp="2026-01-28 15:24:19 +0000 UTC" firstStartedPulling="2026-01-28 15:24:19.75077359 +0000 UTC m=+1271.202931841" lastFinishedPulling="2026-01-28 15:24:23.627954716 +0000 UTC m=+1275.080112957" observedRunningTime="2026-01-28 15:24:24.144342992 +0000 UTC m=+1275.596501273" watchObservedRunningTime="2026-01-28 15:24:24.157611259 +0000 UTC m=+1275.609769510" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.163763 4981 scope.go:117] "RemoveContainer" containerID="813330d2834717ca327ca40dd470f63cab153b5cf018d494be56de42bc57af41" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.174827 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-httpd-config\") pod \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.174894 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nqzl\" (UniqueName: \"kubernetes.io/projected/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-kube-api-access-4nqzl\") pod \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.174989 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-ovndb-tls-certs\") pod 
\"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.175164 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-config\") pod \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.175240 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-combined-ca-bundle\") pod \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\" (UID: \"bd5d9602-d2bd-4dfd-9249-41f61260b5eb\") " Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.181490 4981 scope.go:117] "RemoveContainer" containerID="2e39bb5187129b17753782b936e62c27167ad87bbde6468e0fa65e1fa4917249" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.181573 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "bd5d9602-d2bd-4dfd-9249-41f61260b5eb" (UID: "bd5d9602-d2bd-4dfd-9249-41f61260b5eb"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.181738 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-kube-api-access-4nqzl" (OuterVolumeSpecName: "kube-api-access-4nqzl") pod "bd5d9602-d2bd-4dfd-9249-41f61260b5eb" (UID: "bd5d9602-d2bd-4dfd-9249-41f61260b5eb"). InnerVolumeSpecName "kube-api-access-4nqzl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:24 crc kubenswrapper[4981]: E0128 15:24:24.181999 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e39bb5187129b17753782b936e62c27167ad87bbde6468e0fa65e1fa4917249\": container with ID starting with 2e39bb5187129b17753782b936e62c27167ad87bbde6468e0fa65e1fa4917249 not found: ID does not exist" containerID="2e39bb5187129b17753782b936e62c27167ad87bbde6468e0fa65e1fa4917249" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.182032 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e39bb5187129b17753782b936e62c27167ad87bbde6468e0fa65e1fa4917249"} err="failed to get container status \"2e39bb5187129b17753782b936e62c27167ad87bbde6468e0fa65e1fa4917249\": rpc error: code = NotFound desc = could not find container \"2e39bb5187129b17753782b936e62c27167ad87bbde6468e0fa65e1fa4917249\": container with ID starting with 2e39bb5187129b17753782b936e62c27167ad87bbde6468e0fa65e1fa4917249 not found: ID does not exist" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.182055 4981 scope.go:117] "RemoveContainer" containerID="813330d2834717ca327ca40dd470f63cab153b5cf018d494be56de42bc57af41" Jan 28 15:24:24 crc kubenswrapper[4981]: E0128 15:24:24.182484 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"813330d2834717ca327ca40dd470f63cab153b5cf018d494be56de42bc57af41\": container with ID starting with 813330d2834717ca327ca40dd470f63cab153b5cf018d494be56de42bc57af41 not found: ID does not exist" containerID="813330d2834717ca327ca40dd470f63cab153b5cf018d494be56de42bc57af41" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.182523 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"813330d2834717ca327ca40dd470f63cab153b5cf018d494be56de42bc57af41"} err="failed to get container status \"813330d2834717ca327ca40dd470f63cab153b5cf018d494be56de42bc57af41\": rpc error: code = NotFound desc = could not find container \"813330d2834717ca327ca40dd470f63cab153b5cf018d494be56de42bc57af41\": container with ID starting with 813330d2834717ca327ca40dd470f63cab153b5cf018d494be56de42bc57af41 not found: ID does not exist" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.230063 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd5d9602-d2bd-4dfd-9249-41f61260b5eb" (UID: "bd5d9602-d2bd-4dfd-9249-41f61260b5eb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.232001 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-config" (OuterVolumeSpecName: "config") pod "bd5d9602-d2bd-4dfd-9249-41f61260b5eb" (UID: "bd5d9602-d2bd-4dfd-9249-41f61260b5eb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.268450 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "bd5d9602-d2bd-4dfd-9249-41f61260b5eb" (UID: "bd5d9602-d2bd-4dfd-9249-41f61260b5eb"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.277989 4981 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.278019 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.278032 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.278045 4981 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.278058 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nqzl\" (UniqueName: \"kubernetes.io/projected/bd5d9602-d2bd-4dfd-9249-41f61260b5eb-kube-api-access-4nqzl\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.468739 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-67cc6fc44d-7stvd"] Jan 28 15:24:24 crc kubenswrapper[4981]: I0128 15:24:24.477329 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-67cc6fc44d-7stvd"] Jan 28 15:24:25 crc kubenswrapper[4981]: I0128 15:24:25.331379 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd5d9602-d2bd-4dfd-9249-41f61260b5eb" path="/var/lib/kubelet/pods/bd5d9602-d2bd-4dfd-9249-41f61260b5eb/volumes" Jan 28 15:24:26 crc kubenswrapper[4981]: I0128 15:24:26.629464 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 15:24:26 crc kubenswrapper[4981]: I0128 15:24:26.629989 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="aa8b018e-ddae-4682-b60b-ba6d0dfeff1f" containerName="nova-cell0-conductor-conductor" containerID="cri-o://c87294e667c998bee7c81f2f51795b7d1105fb7a2a54efeea1aba1f0b3406268" gracePeriod=30 Jan 28 15:24:27 crc kubenswrapper[4981]: I0128 15:24:27.630310 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:27 crc kubenswrapper[4981]: I0128 15:24:27.757163 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa8b018e-ddae-4682-b60b-ba6d0dfeff1f-config-data\") pod \"aa8b018e-ddae-4682-b60b-ba6d0dfeff1f\" (UID: \"aa8b018e-ddae-4682-b60b-ba6d0dfeff1f\") " Jan 28 15:24:27 crc kubenswrapper[4981]: I0128 15:24:27.757274 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8b018e-ddae-4682-b60b-ba6d0dfeff1f-combined-ca-bundle\") pod \"aa8b018e-ddae-4682-b60b-ba6d0dfeff1f\" (UID: \"aa8b018e-ddae-4682-b60b-ba6d0dfeff1f\") " Jan 28 15:24:27 crc kubenswrapper[4981]: I0128 15:24:27.757471 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrfwl\" (UniqueName: \"kubernetes.io/projected/aa8b018e-ddae-4682-b60b-ba6d0dfeff1f-kube-api-access-qrfwl\") pod \"aa8b018e-ddae-4682-b60b-ba6d0dfeff1f\" (UID: \"aa8b018e-ddae-4682-b60b-ba6d0dfeff1f\") " Jan 28 15:24:27 crc kubenswrapper[4981]: I0128 15:24:27.767288 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa8b018e-ddae-4682-b60b-ba6d0dfeff1f-kube-api-access-qrfwl" (OuterVolumeSpecName: "kube-api-access-qrfwl") pod "aa8b018e-ddae-4682-b60b-ba6d0dfeff1f" (UID: "aa8b018e-ddae-4682-b60b-ba6d0dfeff1f"). InnerVolumeSpecName "kube-api-access-qrfwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:27 crc kubenswrapper[4981]: I0128 15:24:27.787720 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa8b018e-ddae-4682-b60b-ba6d0dfeff1f-config-data" (OuterVolumeSpecName: "config-data") pod "aa8b018e-ddae-4682-b60b-ba6d0dfeff1f" (UID: "aa8b018e-ddae-4682-b60b-ba6d0dfeff1f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:27 crc kubenswrapper[4981]: I0128 15:24:27.796426 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa8b018e-ddae-4682-b60b-ba6d0dfeff1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aa8b018e-ddae-4682-b60b-ba6d0dfeff1f" (UID: "aa8b018e-ddae-4682-b60b-ba6d0dfeff1f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:27 crc kubenswrapper[4981]: I0128 15:24:27.860327 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aa8b018e-ddae-4682-b60b-ba6d0dfeff1f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:27 crc kubenswrapper[4981]: I0128 15:24:27.860371 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aa8b018e-ddae-4682-b60b-ba6d0dfeff1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:27 crc kubenswrapper[4981]: I0128 15:24:27.860394 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrfwl\" (UniqueName: \"kubernetes.io/projected/aa8b018e-ddae-4682-b60b-ba6d0dfeff1f-kube-api-access-qrfwl\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.187541 4981 generic.go:334] "Generic (PLEG): container finished" podID="aa8b018e-ddae-4682-b60b-ba6d0dfeff1f" containerID="c87294e667c998bee7c81f2f51795b7d1105fb7a2a54efeea1aba1f0b3406268" exitCode=0 Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.187607 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aa8b018e-ddae-4682-b60b-ba6d0dfeff1f","Type":"ContainerDied","Data":"c87294e667c998bee7c81f2f51795b7d1105fb7a2a54efeea1aba1f0b3406268"} Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.187649 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"aa8b018e-ddae-4682-b60b-ba6d0dfeff1f","Type":"ContainerDied","Data":"39cee783cc4f6de902a43bfb969ab8d3f617c72892e8a7944637b13b9237a068"} Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.187678 4981 scope.go:117] "RemoveContainer" containerID="c87294e667c998bee7c81f2f51795b7d1105fb7a2a54efeea1aba1f0b3406268" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.187693 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.237339 4981 scope.go:117] "RemoveContainer" containerID="c87294e667c998bee7c81f2f51795b7d1105fb7a2a54efeea1aba1f0b3406268" Jan 28 15:24:28 crc kubenswrapper[4981]: E0128 15:24:28.238080 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c87294e667c998bee7c81f2f51795b7d1105fb7a2a54efeea1aba1f0b3406268\": container with ID starting with c87294e667c998bee7c81f2f51795b7d1105fb7a2a54efeea1aba1f0b3406268 not found: ID does not exist" containerID="c87294e667c998bee7c81f2f51795b7d1105fb7a2a54efeea1aba1f0b3406268" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.238109 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c87294e667c998bee7c81f2f51795b7d1105fb7a2a54efeea1aba1f0b3406268"} err="failed to get container status \"c87294e667c998bee7c81f2f51795b7d1105fb7a2a54efeea1aba1f0b3406268\": rpc error: code = NotFound desc = could not find container \"c87294e667c998bee7c81f2f51795b7d1105fb7a2a54efeea1aba1f0b3406268\": container with ID starting with c87294e667c998bee7c81f2f51795b7d1105fb7a2a54efeea1aba1f0b3406268 not found: ID does not exist" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.248694 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.271418 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.278269 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 15:24:28 crc kubenswrapper[4981]: E0128 15:24:28.278863 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5d9602-d2bd-4dfd-9249-41f61260b5eb" containerName="neutron-httpd" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.278979 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5d9602-d2bd-4dfd-9249-41f61260b5eb" containerName="neutron-httpd" Jan 28 15:24:28 crc kubenswrapper[4981]: E0128 15:24:28.279068 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa8b018e-ddae-4682-b60b-ba6d0dfeff1f" containerName="nova-cell0-conductor-conductor" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.279153 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa8b018e-ddae-4682-b60b-ba6d0dfeff1f" containerName="nova-cell0-conductor-conductor" Jan 28 15:24:28 crc kubenswrapper[4981]: E0128 15:24:28.279256 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5d9602-d2bd-4dfd-9249-41f61260b5eb" containerName="neutron-api" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.279330 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5d9602-d2bd-4dfd-9249-41f61260b5eb" containerName="neutron-api" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.279642 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa8b018e-ddae-4682-b60b-ba6d0dfeff1f" containerName="nova-cell0-conductor-conductor" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.279740 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5d9602-d2bd-4dfd-9249-41f61260b5eb" containerName="neutron-api" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.279823 4981 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bd5d9602-d2bd-4dfd-9249-41f61260b5eb" containerName="neutron-httpd" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.280615 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.283534 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nnzdz" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.284000 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.291442 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.483612 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhcxd\" (UniqueName: \"kubernetes.io/projected/f11d89fe-23cc-4fe1-b03e-c3c5e3613280-kube-api-access-fhcxd\") pod \"nova-cell0-conductor-0\" (UID: \"f11d89fe-23cc-4fe1-b03e-c3c5e3613280\") " pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.483907 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f11d89fe-23cc-4fe1-b03e-c3c5e3613280-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f11d89fe-23cc-4fe1-b03e-c3c5e3613280\") " pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.484181 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11d89fe-23cc-4fe1-b03e-c3c5e3613280-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f11d89fe-23cc-4fe1-b03e-c3c5e3613280\") " pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.586623 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhcxd\" (UniqueName: \"kubernetes.io/projected/f11d89fe-23cc-4fe1-b03e-c3c5e3613280-kube-api-access-fhcxd\") pod \"nova-cell0-conductor-0\" (UID: \"f11d89fe-23cc-4fe1-b03e-c3c5e3613280\") " pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.586949 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f11d89fe-23cc-4fe1-b03e-c3c5e3613280-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f11d89fe-23cc-4fe1-b03e-c3c5e3613280\") " pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.587264 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11d89fe-23cc-4fe1-b03e-c3c5e3613280-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f11d89fe-23cc-4fe1-b03e-c3c5e3613280\") " pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.601045 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f11d89fe-23cc-4fe1-b03e-c3c5e3613280-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f11d89fe-23cc-4fe1-b03e-c3c5e3613280\") " pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.604928 4981 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f11d89fe-23cc-4fe1-b03e-c3c5e3613280-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f11d89fe-23cc-4fe1-b03e-c3c5e3613280\") " pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.613341 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhcxd\" (UniqueName: \"kubernetes.io/projected/f11d89fe-23cc-4fe1-b03e-c3c5e3613280-kube-api-access-fhcxd\") pod \"nova-cell0-conductor-0\" (UID: \"f11d89fe-23cc-4fe1-b03e-c3c5e3613280\") " pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.686875 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.687312 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerName="ceilometer-central-agent" containerID="cri-o://23a4613a401863f9e1580bb5dd7a4e2a6aed8a293ee795f7021fd40f7d496876" gracePeriod=30 Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.687928 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerName="ceilometer-notification-agent" containerID="cri-o://94dae92ad99a8d5f41a3a667f48adcff64798cb277fb148fd9b9b3afd9cecd5a" gracePeriod=30 Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.687962 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerName="proxy-httpd" containerID="cri-o://b3e58c9e1c233ff448b50e2b0f5da9c7aab98ec37b015966174afaeb9bc75305" gracePeriod=30 Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.688060 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerName="sg-core" containerID="cri-o://31077dd12b8bd046f695576c19d91221acec9d3c3698b2924fff27a29be91fee" gracePeriod=30 Jan 28 15:24:28 crc kubenswrapper[4981]: I0128 15:24:28.912136 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.200717 4981 generic.go:334] "Generic (PLEG): container finished" podID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerID="b3e58c9e1c233ff448b50e2b0f5da9c7aab98ec37b015966174afaeb9bc75305" exitCode=0 Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.200921 4981 generic.go:334] "Generic (PLEG): container finished" podID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerID="31077dd12b8bd046f695576c19d91221acec9d3c3698b2924fff27a29be91fee" exitCode=2 Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.200929 4981 generic.go:334] "Generic (PLEG): container finished" podID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerID="23a4613a401863f9e1580bb5dd7a4e2a6aed8a293ee795f7021fd40f7d496876" exitCode=0 Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.200960 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd0daa00-17fb-46ad-90b5-afbc21f83db8","Type":"ContainerDied","Data":"b3e58c9e1c233ff448b50e2b0f5da9c7aab98ec37b015966174afaeb9bc75305"} Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.200983 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd0daa00-17fb-46ad-90b5-afbc21f83db8","Type":"ContainerDied","Data":"31077dd12b8bd046f695576c19d91221acec9d3c3698b2924fff27a29be91fee"} Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.200992 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd0daa00-17fb-46ad-90b5-afbc21f83db8","Type":"ContainerDied","Data":"23a4613a401863f9e1580bb5dd7a4e2a6aed8a293ee795f7021fd40f7d496876"} Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.381300 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa8b018e-ddae-4682-b60b-ba6d0dfeff1f" path="/var/lib/kubelet/pods/aa8b018e-ddae-4682-b60b-ba6d0dfeff1f/volumes" Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.406030 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.455577 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.512756 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9dsv\" (UniqueName: \"kubernetes.io/projected/cd0daa00-17fb-46ad-90b5-afbc21f83db8-kube-api-access-v9dsv\") pod \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.512804 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-scripts\") pod \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.512838 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-combined-ca-bundle\") pod \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.512877 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-sg-core-conf-yaml\") pod \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.512914 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-config-data\") pod \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.512944 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd0daa00-17fb-46ad-90b5-afbc21f83db8-run-httpd\") pod \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.512991 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd0daa00-17fb-46ad-90b5-afbc21f83db8-log-httpd\") pod \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\" (UID: \"cd0daa00-17fb-46ad-90b5-afbc21f83db8\") " Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.513898 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0daa00-17fb-46ad-90b5-afbc21f83db8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cd0daa00-17fb-46ad-90b5-afbc21f83db8" (UID: "cd0daa00-17fb-46ad-90b5-afbc21f83db8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.516849 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd0daa00-17fb-46ad-90b5-afbc21f83db8-kube-api-access-v9dsv" (OuterVolumeSpecName: "kube-api-access-v9dsv") pod "cd0daa00-17fb-46ad-90b5-afbc21f83db8" (UID: "cd0daa00-17fb-46ad-90b5-afbc21f83db8"). InnerVolumeSpecName "kube-api-access-v9dsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.524576 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-scripts" (OuterVolumeSpecName: "scripts") pod "cd0daa00-17fb-46ad-90b5-afbc21f83db8" (UID: "cd0daa00-17fb-46ad-90b5-afbc21f83db8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.525441 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd0daa00-17fb-46ad-90b5-afbc21f83db8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cd0daa00-17fb-46ad-90b5-afbc21f83db8" (UID: "cd0daa00-17fb-46ad-90b5-afbc21f83db8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.568377 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cd0daa00-17fb-46ad-90b5-afbc21f83db8" (UID: "cd0daa00-17fb-46ad-90b5-afbc21f83db8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.616455 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9dsv\" (UniqueName: \"kubernetes.io/projected/cd0daa00-17fb-46ad-90b5-afbc21f83db8-kube-api-access-v9dsv\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.616495 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.616509 4981 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.616520 4981 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd0daa00-17fb-46ad-90b5-afbc21f83db8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.616530 4981 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cd0daa00-17fb-46ad-90b5-afbc21f83db8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.744419 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cd0daa00-17fb-46ad-90b5-afbc21f83db8" (UID: "cd0daa00-17fb-46ad-90b5-afbc21f83db8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.766300 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-config-data" (OuterVolumeSpecName: "config-data") pod "cd0daa00-17fb-46ad-90b5-afbc21f83db8" (UID: "cd0daa00-17fb-46ad-90b5-afbc21f83db8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.819712 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:29 crc kubenswrapper[4981]: I0128 15:24:29.819922 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cd0daa00-17fb-46ad-90b5-afbc21f83db8-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.215560 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f11d89fe-23cc-4fe1-b03e-c3c5e3613280","Type":"ContainerStarted","Data":"082e05b7c5a7fa3a10426a356c098e64802454d3ac7c4b42f0615589a65cdcdb"} Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.215886 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f11d89fe-23cc-4fe1-b03e-c3c5e3613280","Type":"ContainerStarted","Data":"02412c7cdcf9f449c9d7fe2150402c61029912cf7dee27ac33f3e7d15375041b"} Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.215924 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.221679 4981 generic.go:334] "Generic (PLEG): container finished" podID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerID="94dae92ad99a8d5f41a3a667f48adcff64798cb277fb148fd9b9b3afd9cecd5a" exitCode=0 Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.221745 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd0daa00-17fb-46ad-90b5-afbc21f83db8","Type":"ContainerDied","Data":"94dae92ad99a8d5f41a3a667f48adcff64798cb277fb148fd9b9b3afd9cecd5a"} Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.221781 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cd0daa00-17fb-46ad-90b5-afbc21f83db8","Type":"ContainerDied","Data":"ab885f46768e0c01cfa95b2203b953d649713e14e363a71160f8ad9004469f17"} Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.221816 4981 scope.go:117] "RemoveContainer" containerID="b3e58c9e1c233ff448b50e2b0f5da9c7aab98ec37b015966174afaeb9bc75305" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.221970 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.245626 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.245609637 podStartE2EDuration="2.245609637s" podCreationTimestamp="2026-01-28 15:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:30.234981849 +0000 UTC m=+1281.687140100" watchObservedRunningTime="2026-01-28 15:24:30.245609637 +0000 UTC m=+1281.697767878" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.257789 4981 scope.go:117] "RemoveContainer" containerID="31077dd12b8bd046f695576c19d91221acec9d3c3698b2924fff27a29be91fee" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.282282 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.298464 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.311959 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:24:30 crc kubenswrapper[4981]: E0128 15:24:30.312421 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerName="sg-core" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.312439 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerName="sg-core" Jan 28 15:24:30 crc kubenswrapper[4981]: E0128 15:24:30.312451 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerName="ceilometer-notification-agent" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.312460 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerName="ceilometer-notification-agent" Jan 28 15:24:30 crc kubenswrapper[4981]: E0128 15:24:30.312478 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerName="ceilometer-central-agent" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.312488 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerName="ceilometer-central-agent" Jan 28 15:24:30 crc kubenswrapper[4981]: E0128 15:24:30.312530 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerName="proxy-httpd" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.312541 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerName="proxy-httpd" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.312782 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerName="ceilometer-central-agent" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.312794 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerName="proxy-httpd" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.312808 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerName="sg-core" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.312826 4981 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" containerName="ceilometer-notification-agent" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.314946 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.315752 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.348083 4981 scope.go:117] "RemoveContainer" containerID="94dae92ad99a8d5f41a3a667f48adcff64798cb277fb148fd9b9b3afd9cecd5a" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.349855 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.350544 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.350669 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f75ac87-89fc-4468-abd3-7347faabc1dd-run-httpd\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.350699 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f75ac87-89fc-4468-abd3-7347faabc1dd-log-httpd\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.350735 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-scripts\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.350756 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.350790 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xxm5\" (UniqueName: \"kubernetes.io/projected/3f75ac87-89fc-4468-abd3-7347faabc1dd-kube-api-access-8xxm5\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.350815 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-config-data\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.350887 4981 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"ceilometer-config-data" Jan 28 15:24:30 crc kubenswrapper[4981]: E0128 15:24:30.353421 4981 kuberuntime_gc.go:389] "Failed to remove container log dead symlink" err="remove /var/log/containers/ceilometer-0_openstack_ceilometer-notification-agent-94dae92ad99a8d5f41a3a667f48adcff64798cb277fb148fd9b9b3afd9cecd5a.log: no such file or directory" path="/var/log/containers/ceilometer-0_openstack_ceilometer-notification-agent-94dae92ad99a8d5f41a3a667f48adcff64798cb277fb148fd9b9b3afd9cecd5a.log" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.391636 4981 scope.go:117] "RemoveContainer" containerID="23a4613a401863f9e1580bb5dd7a4e2a6aed8a293ee795f7021fd40f7d496876" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.420624 4981 scope.go:117] "RemoveContainer" containerID="b3e58c9e1c233ff448b50e2b0f5da9c7aab98ec37b015966174afaeb9bc75305" Jan 28 15:24:30 crc kubenswrapper[4981]: E0128 15:24:30.421006 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3e58c9e1c233ff448b50e2b0f5da9c7aab98ec37b015966174afaeb9bc75305\": container with ID starting with b3e58c9e1c233ff448b50e2b0f5da9c7aab98ec37b015966174afaeb9bc75305 not found: ID does not exist" containerID="b3e58c9e1c233ff448b50e2b0f5da9c7aab98ec37b015966174afaeb9bc75305" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.421036 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3e58c9e1c233ff448b50e2b0f5da9c7aab98ec37b015966174afaeb9bc75305"} err="failed to get container status \"b3e58c9e1c233ff448b50e2b0f5da9c7aab98ec37b015966174afaeb9bc75305\": rpc error: code = NotFound desc = could not find container \"b3e58c9e1c233ff448b50e2b0f5da9c7aab98ec37b015966174afaeb9bc75305\": container with ID starting with b3e58c9e1c233ff448b50e2b0f5da9c7aab98ec37b015966174afaeb9bc75305 not found: ID does not exist" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.421058 4981 scope.go:117] "RemoveContainer" containerID="31077dd12b8bd046f695576c19d91221acec9d3c3698b2924fff27a29be91fee" Jan 28 15:24:30 crc kubenswrapper[4981]: E0128 15:24:30.421544 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31077dd12b8bd046f695576c19d91221acec9d3c3698b2924fff27a29be91fee\": container with ID starting with 31077dd12b8bd046f695576c19d91221acec9d3c3698b2924fff27a29be91fee not found: ID does not exist" containerID="31077dd12b8bd046f695576c19d91221acec9d3c3698b2924fff27a29be91fee" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.421563 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31077dd12b8bd046f695576c19d91221acec9d3c3698b2924fff27a29be91fee"} err="failed to get container status \"31077dd12b8bd046f695576c19d91221acec9d3c3698b2924fff27a29be91fee\": rpc error: code = NotFound desc = could not find container \"31077dd12b8bd046f695576c19d91221acec9d3c3698b2924fff27a29be91fee\": container with ID starting with 31077dd12b8bd046f695576c19d91221acec9d3c3698b2924fff27a29be91fee not found: ID does not exist" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.421576 4981 scope.go:117] "RemoveContainer" containerID="94dae92ad99a8d5f41a3a667f48adcff64798cb277fb148fd9b9b3afd9cecd5a" Jan 28 15:24:30 crc kubenswrapper[4981]: E0128 15:24:30.422054 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"94dae92ad99a8d5f41a3a667f48adcff64798cb277fb148fd9b9b3afd9cecd5a\": container with ID starting with 94dae92ad99a8d5f41a3a667f48adcff64798cb277fb148fd9b9b3afd9cecd5a not found: ID does not exist" containerID="94dae92ad99a8d5f41a3a667f48adcff64798cb277fb148fd9b9b3afd9cecd5a" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.422072 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94dae92ad99a8d5f41a3a667f48adcff64798cb277fb148fd9b9b3afd9cecd5a"} err="failed to get container status \"94dae92ad99a8d5f41a3a667f48adcff64798cb277fb148fd9b9b3afd9cecd5a\": rpc error: code = NotFound desc = could not find container \"94dae92ad99a8d5f41a3a667f48adcff64798cb277fb148fd9b9b3afd9cecd5a\": container with ID starting with 94dae92ad99a8d5f41a3a667f48adcff64798cb277fb148fd9b9b3afd9cecd5a not found: ID does not exist" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.422083 4981 scope.go:117] "RemoveContainer" containerID="23a4613a401863f9e1580bb5dd7a4e2a6aed8a293ee795f7021fd40f7d496876" Jan 28 15:24:30 crc kubenswrapper[4981]: E0128 15:24:30.422376 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23a4613a401863f9e1580bb5dd7a4e2a6aed8a293ee795f7021fd40f7d496876\": container with ID starting with 23a4613a401863f9e1580bb5dd7a4e2a6aed8a293ee795f7021fd40f7d496876 not found: ID does not exist" containerID="23a4613a401863f9e1580bb5dd7a4e2a6aed8a293ee795f7021fd40f7d496876" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.422398 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23a4613a401863f9e1580bb5dd7a4e2a6aed8a293ee795f7021fd40f7d496876"} err="failed to get container status \"23a4613a401863f9e1580bb5dd7a4e2a6aed8a293ee795f7021fd40f7d496876\": rpc error: code = NotFound desc = could not find container \"23a4613a401863f9e1580bb5dd7a4e2a6aed8a293ee795f7021fd40f7d496876\": container with ID starting with 23a4613a401863f9e1580bb5dd7a4e2a6aed8a293ee795f7021fd40f7d496876 not found: ID does not exist" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.452176 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f75ac87-89fc-4468-abd3-7347faabc1dd-run-httpd\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.452230 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f75ac87-89fc-4468-abd3-7347faabc1dd-log-httpd\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.452267 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-scripts\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.452302 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 
15:24:30.452328 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xxm5\" (UniqueName: \"kubernetes.io/projected/3f75ac87-89fc-4468-abd3-7347faabc1dd-kube-api-access-8xxm5\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.452364 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-config-data\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.452463 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.455236 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f75ac87-89fc-4468-abd3-7347faabc1dd-run-httpd\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.455307 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f75ac87-89fc-4468-abd3-7347faabc1dd-log-httpd\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.456740 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-scripts\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.457250 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.457474 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-config-data\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.463496 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.471679 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xxm5\" (UniqueName: \"kubernetes.io/projected/3f75ac87-89fc-4468-abd3-7347faabc1dd-kube-api-access-8xxm5\") pod \"ceilometer-0\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " pod="openstack/ceilometer-0" Jan 28 15:24:30 crc kubenswrapper[4981]: I0128 15:24:30.677035 4981 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:24:31 crc kubenswrapper[4981]: I0128 15:24:31.113135 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:24:31 crc kubenswrapper[4981]: W0128 15:24:31.119620 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f75ac87_89fc_4468_abd3_7347faabc1dd.slice/crio-a4bd9f859ee9fb3759f15b0498ad78ec210436847c10286c62f03762873446e5 WatchSource:0}: Error finding container a4bd9f859ee9fb3759f15b0498ad78ec210436847c10286c62f03762873446e5: Status 404 returned error can't find the container with id a4bd9f859ee9fb3759f15b0498ad78ec210436847c10286c62f03762873446e5 Jan 28 15:24:31 crc kubenswrapper[4981]: I0128 15:24:31.229469 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f75ac87-89fc-4468-abd3-7347faabc1dd","Type":"ContainerStarted","Data":"a4bd9f859ee9fb3759f15b0498ad78ec210436847c10286c62f03762873446e5"} Jan 28 15:24:31 crc kubenswrapper[4981]: I0128 15:24:31.328820 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd0daa00-17fb-46ad-90b5-afbc21f83db8" path="/var/lib/kubelet/pods/cd0daa00-17fb-46ad-90b5-afbc21f83db8/volumes" Jan 28 15:24:32 crc kubenswrapper[4981]: I0128 15:24:32.239926 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f75ac87-89fc-4468-abd3-7347faabc1dd","Type":"ContainerStarted","Data":"a792d74426a84d1e8b782eb765fb9849d992317e6a398b23264f64a37feca00d"} Jan 28 15:24:33 crc kubenswrapper[4981]: I0128 15:24:33.252412 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f75ac87-89fc-4468-abd3-7347faabc1dd","Type":"ContainerStarted","Data":"45a1d31fe914071bf1e29dd9a450d7aa3a7ff8eae89c38ea213cefbeb7ea8e1b"} Jan 28 15:24:34 crc kubenswrapper[4981]: I0128 15:24:34.269430 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f75ac87-89fc-4468-abd3-7347faabc1dd","Type":"ContainerStarted","Data":"c837b985a29893346264aa885325b28d426d273e03c56a877d6622d21ebb6394"} Jan 28 15:24:36 crc kubenswrapper[4981]: I0128 15:24:36.287717 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f75ac87-89fc-4468-abd3-7347faabc1dd","Type":"ContainerStarted","Data":"3a0f9b90f6d8855e669c270fa1947f5460d71bada5972eb0e994890c1823d862"} Jan 28 15:24:36 crc kubenswrapper[4981]: I0128 15:24:36.288136 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 15:24:36 crc kubenswrapper[4981]: I0128 15:24:36.318025 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.319607541 podStartE2EDuration="6.318001707s" podCreationTimestamp="2026-01-28 15:24:30 +0000 UTC" firstStartedPulling="2026-01-28 15:24:31.121463225 +0000 UTC m=+1282.573621476" lastFinishedPulling="2026-01-28 15:24:35.119857401 +0000 UTC m=+1286.572015642" observedRunningTime="2026-01-28 15:24:36.304403991 +0000 UTC m=+1287.756562262" watchObservedRunningTime="2026-01-28 15:24:36.318001707 +0000 UTC m=+1287.770159958" Jan 28 15:24:38 crc kubenswrapper[4981]: I0128 15:24:38.961274 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.497658 4981 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-cell0-cell-mapping-w4dpt"] Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.498683 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-w4dpt" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.500727 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.505062 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.510956 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-w4dpt"] Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.613555 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61d4eb9c-3032-485a-9c38-983eae66cbc8-scripts\") pod \"nova-cell0-cell-mapping-w4dpt\" (UID: \"61d4eb9c-3032-485a-9c38-983eae66cbc8\") " pod="openstack/nova-cell0-cell-mapping-w4dpt" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.613607 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngbss\" (UniqueName: \"kubernetes.io/projected/61d4eb9c-3032-485a-9c38-983eae66cbc8-kube-api-access-ngbss\") pod \"nova-cell0-cell-mapping-w4dpt\" (UID: \"61d4eb9c-3032-485a-9c38-983eae66cbc8\") " pod="openstack/nova-cell0-cell-mapping-w4dpt" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.613651 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d4eb9c-3032-485a-9c38-983eae66cbc8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-w4dpt\" (UID: \"61d4eb9c-3032-485a-9c38-983eae66cbc8\") " pod="openstack/nova-cell0-cell-mapping-w4dpt" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.613897 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61d4eb9c-3032-485a-9c38-983eae66cbc8-config-data\") pod \"nova-cell0-cell-mapping-w4dpt\" (UID: \"61d4eb9c-3032-485a-9c38-983eae66cbc8\") " pod="openstack/nova-cell0-cell-mapping-w4dpt" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.652181 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.653813 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.657151 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.663740 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.665132 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.669086 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.686961 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.721487 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61d4eb9c-3032-485a-9c38-983eae66cbc8-config-data\") pod \"nova-cell0-cell-mapping-w4dpt\" (UID: \"61d4eb9c-3032-485a-9c38-983eae66cbc8\") " pod="openstack/nova-cell0-cell-mapping-w4dpt" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.721598 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61d4eb9c-3032-485a-9c38-983eae66cbc8-scripts\") pod \"nova-cell0-cell-mapping-w4dpt\" (UID: \"61d4eb9c-3032-485a-9c38-983eae66cbc8\") " pod="openstack/nova-cell0-cell-mapping-w4dpt" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.721628 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngbss\" (UniqueName: \"kubernetes.io/projected/61d4eb9c-3032-485a-9c38-983eae66cbc8-kube-api-access-ngbss\") pod \"nova-cell0-cell-mapping-w4dpt\" (UID: \"61d4eb9c-3032-485a-9c38-983eae66cbc8\") " pod="openstack/nova-cell0-cell-mapping-w4dpt" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.721675 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d4eb9c-3032-485a-9c38-983eae66cbc8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-w4dpt\" (UID: \"61d4eb9c-3032-485a-9c38-983eae66cbc8\") " pod="openstack/nova-cell0-cell-mapping-w4dpt" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.731676 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.746044 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61d4eb9c-3032-485a-9c38-983eae66cbc8-config-data\") pod \"nova-cell0-cell-mapping-w4dpt\" (UID: \"61d4eb9c-3032-485a-9c38-983eae66cbc8\") " pod="openstack/nova-cell0-cell-mapping-w4dpt" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.754161 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61d4eb9c-3032-485a-9c38-983eae66cbc8-scripts\") pod \"nova-cell0-cell-mapping-w4dpt\" (UID: \"61d4eb9c-3032-485a-9c38-983eae66cbc8\") " pod="openstack/nova-cell0-cell-mapping-w4dpt" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.763505 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d4eb9c-3032-485a-9c38-983eae66cbc8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-w4dpt\" (UID: \"61d4eb9c-3032-485a-9c38-983eae66cbc8\") " pod="openstack/nova-cell0-cell-mapping-w4dpt" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.767294 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngbss\" (UniqueName: \"kubernetes.io/projected/61d4eb9c-3032-485a-9c38-983eae66cbc8-kube-api-access-ngbss\") pod \"nova-cell0-cell-mapping-w4dpt\" (UID: 
\"61d4eb9c-3032-485a-9c38-983eae66cbc8\") " pod="openstack/nova-cell0-cell-mapping-w4dpt" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.815559 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.817028 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.819637 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-w4dpt" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.823640 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7f2b64-003f-47e8-a3b0-c29cb1c47f55-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc7f2b64-003f-47e8-a3b0-c29cb1c47f55\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.823937 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7f2b64-003f-47e8-a3b0-c29cb1c47f55-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc7f2b64-003f-47e8-a3b0-c29cb1c47f55\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.824033 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\") " pod="openstack/nova-api-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.824081 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-config-data\") pod \"nova-api-0\" (UID: \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\") " pod="openstack/nova-api-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.824253 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpftv\" (UniqueName: \"kubernetes.io/projected/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-kube-api-access-hpftv\") pod \"nova-api-0\" (UID: \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\") " pod="openstack/nova-api-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.824330 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9nbr\" (UniqueName: \"kubernetes.io/projected/dc7f2b64-003f-47e8-a3b0-c29cb1c47f55-kube-api-access-x9nbr\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc7f2b64-003f-47e8-a3b0-c29cb1c47f55\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.824365 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-logs\") pod \"nova-api-0\" (UID: \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\") " pod="openstack/nova-api-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.842398 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.889281 4981 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-scheduler-0"] Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.898102 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.902290 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.909615 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.916292 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.926278 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7f2b64-003f-47e8-a3b0-c29cb1c47f55-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc7f2b64-003f-47e8-a3b0-c29cb1c47f55\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.926341 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7f2b64-003f-47e8-a3b0-c29cb1c47f55-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc7f2b64-003f-47e8-a3b0-c29cb1c47f55\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.926388 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\") " pod="openstack/nova-api-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.926425 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-config-data\") pod \"nova-api-0\" (UID: \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\") " pod="openstack/nova-api-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.926457 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqwss\" (UniqueName: \"kubernetes.io/projected/26c2d4af-5bbe-4a1a-a225-3a76eff41226-kube-api-access-zqwss\") pod \"nova-scheduler-0\" (UID: \"26c2d4af-5bbe-4a1a-a225-3a76eff41226\") " pod="openstack/nova-scheduler-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.926479 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c2d4af-5bbe-4a1a-a225-3a76eff41226-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"26c2d4af-5bbe-4a1a-a225-3a76eff41226\") " pod="openstack/nova-scheduler-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.926516 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c2d4af-5bbe-4a1a-a225-3a76eff41226-config-data\") pod \"nova-scheduler-0\" (UID: \"26c2d4af-5bbe-4a1a-a225-3a76eff41226\") " pod="openstack/nova-scheduler-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.926556 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpftv\" (UniqueName: 
\"kubernetes.io/projected/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-kube-api-access-hpftv\") pod \"nova-api-0\" (UID: \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\") " pod="openstack/nova-api-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.926584 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9nbr\" (UniqueName: \"kubernetes.io/projected/dc7f2b64-003f-47e8-a3b0-c29cb1c47f55-kube-api-access-x9nbr\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc7f2b64-003f-47e8-a3b0-c29cb1c47f55\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.926605 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-logs\") pod \"nova-api-0\" (UID: \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\") " pod="openstack/nova-api-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.927029 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-logs\") pod \"nova-api-0\" (UID: \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\") " pod="openstack/nova-api-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.931945 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\") " pod="openstack/nova-api-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.937864 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7f2b64-003f-47e8-a3b0-c29cb1c47f55-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc7f2b64-003f-47e8-a3b0-c29cb1c47f55\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.941388 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lwm9x"] Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.942878 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.959395 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-config-data\") pod \"nova-api-0\" (UID: \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\") " pod="openstack/nova-api-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.959516 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7f2b64-003f-47e8-a3b0-c29cb1c47f55-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc7f2b64-003f-47e8-a3b0-c29cb1c47f55\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.960088 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lwm9x"] Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.971390 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9nbr\" (UniqueName: \"kubernetes.io/projected/dc7f2b64-003f-47e8-a3b0-c29cb1c47f55-kube-api-access-x9nbr\") pod \"nova-cell1-novncproxy-0\" (UID: \"dc7f2b64-003f-47e8-a3b0-c29cb1c47f55\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.977095 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpftv\" (UniqueName: \"kubernetes.io/projected/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-kube-api-access-hpftv\") pod \"nova-api-0\" (UID: \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\") " pod="openstack/nova-api-0" Jan 28 15:24:39 crc kubenswrapper[4981]: I0128 15:24:39.985474 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.004589 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.028489 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e5013b-9327-47dc-ac6f-1e749b59ca64-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\") " pod="openstack/nova-metadata-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.028565 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e5013b-9327-47dc-ac6f-1e749b59ca64-config-data\") pod \"nova-metadata-0\" (UID: \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\") " pod="openstack/nova-metadata-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.028612 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqwss\" (UniqueName: \"kubernetes.io/projected/26c2d4af-5bbe-4a1a-a225-3a76eff41226-kube-api-access-zqwss\") pod \"nova-scheduler-0\" (UID: \"26c2d4af-5bbe-4a1a-a225-3a76eff41226\") " pod="openstack/nova-scheduler-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.028628 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9e5013b-9327-47dc-ac6f-1e749b59ca64-logs\") pod \"nova-metadata-0\" (UID: \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\") " pod="openstack/nova-metadata-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.028650 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c2d4af-5bbe-4a1a-a225-3a76eff41226-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"26c2d4af-5bbe-4a1a-a225-3a76eff41226\") " pod="openstack/nova-scheduler-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.028673 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97gsw\" (UniqueName: \"kubernetes.io/projected/f9e5013b-9327-47dc-ac6f-1e749b59ca64-kube-api-access-97gsw\") pod \"nova-metadata-0\" (UID: \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\") " pod="openstack/nova-metadata-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.028702 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c2d4af-5bbe-4a1a-a225-3a76eff41226-config-data\") pod \"nova-scheduler-0\" (UID: \"26c2d4af-5bbe-4a1a-a225-3a76eff41226\") " pod="openstack/nova-scheduler-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.037458 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c2d4af-5bbe-4a1a-a225-3a76eff41226-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"26c2d4af-5bbe-4a1a-a225-3a76eff41226\") " pod="openstack/nova-scheduler-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.039484 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c2d4af-5bbe-4a1a-a225-3a76eff41226-config-data\") pod \"nova-scheduler-0\" (UID: \"26c2d4af-5bbe-4a1a-a225-3a76eff41226\") " pod="openstack/nova-scheduler-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.045872 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqwss\" 
(UniqueName: \"kubernetes.io/projected/26c2d4af-5bbe-4a1a-a225-3a76eff41226-kube-api-access-zqwss\") pod \"nova-scheduler-0\" (UID: \"26c2d4af-5bbe-4a1a-a225-3a76eff41226\") " pod="openstack/nova-scheduler-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.129891 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-lwm9x\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.129938 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9e5013b-9327-47dc-ac6f-1e749b59ca64-logs\") pod \"nova-metadata-0\" (UID: \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\") " pod="openstack/nova-metadata-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.129981 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97gsw\" (UniqueName: \"kubernetes.io/projected/f9e5013b-9327-47dc-ac6f-1e749b59ca64-kube-api-access-97gsw\") pod \"nova-metadata-0\" (UID: \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\") " pod="openstack/nova-metadata-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.130050 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-lwm9x\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.130087 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-lwm9x\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.130147 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-config\") pod \"dnsmasq-dns-845d6d6f59-lwm9x\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.130193 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-lwm9x\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.130238 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e5013b-9327-47dc-ac6f-1e749b59ca64-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\") " pod="openstack/nova-metadata-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.130287 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58rx7\" (UniqueName: 
\"kubernetes.io/projected/27c4ebb6-cd4b-4021-a139-49536ce42763-kube-api-access-58rx7\") pod \"dnsmasq-dns-845d6d6f59-lwm9x\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.130342 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e5013b-9327-47dc-ac6f-1e749b59ca64-config-data\") pod \"nova-metadata-0\" (UID: \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\") " pod="openstack/nova-metadata-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.131416 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9e5013b-9327-47dc-ac6f-1e749b59ca64-logs\") pod \"nova-metadata-0\" (UID: \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\") " pod="openstack/nova-metadata-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.136845 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e5013b-9327-47dc-ac6f-1e749b59ca64-config-data\") pod \"nova-metadata-0\" (UID: \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\") " pod="openstack/nova-metadata-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.151404 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e5013b-9327-47dc-ac6f-1e749b59ca64-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\") " pod="openstack/nova-metadata-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.156223 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97gsw\" (UniqueName: \"kubernetes.io/projected/f9e5013b-9327-47dc-ac6f-1e749b59ca64-kube-api-access-97gsw\") pod \"nova-metadata-0\" (UID: \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\") " pod="openstack/nova-metadata-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.175492 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.234429 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-lwm9x\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.234467 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-lwm9x\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.234511 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-config\") pod \"dnsmasq-dns-845d6d6f59-lwm9x\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.234540 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-lwm9x\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.234572 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58rx7\" (UniqueName: \"kubernetes.io/projected/27c4ebb6-cd4b-4021-a139-49536ce42763-kube-api-access-58rx7\") pod \"dnsmasq-dns-845d6d6f59-lwm9x\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.234630 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-lwm9x\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.240400 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-dns-svc\") pod \"dnsmasq-dns-845d6d6f59-lwm9x\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.250025 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-dns-swift-storage-0\") pod \"dnsmasq-dns-845d6d6f59-lwm9x\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.250264 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-config\") pod \"dnsmasq-dns-845d6d6f59-lwm9x\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:40 crc kubenswrapper[4981]: 
I0128 15:24:40.254695 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-ovsdbserver-nb\") pod \"dnsmasq-dns-845d6d6f59-lwm9x\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.257891 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-ovsdbserver-sb\") pod \"dnsmasq-dns-845d6d6f59-lwm9x\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.274223 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58rx7\" (UniqueName: \"kubernetes.io/projected/27c4ebb6-cd4b-4021-a139-49536ce42763-kube-api-access-58rx7\") pod \"dnsmasq-dns-845d6d6f59-lwm9x\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.316230 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.339433 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.570810 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.579777 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.670096 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fnttx"] Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.671346 4981 util.go:30] "No sandbox for pod can be found. 
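
The SyncLoop ADD/UPDATE lines are the kubelet's view of API-server traffic, while the SyncLoop (PLEG) lines are the container runtime's view coming back the other way; both end up reflected in the pod's status. A rough way to observe the same lifecycle from outside the node is a client-go watch on the namespace — a sketch, with the kubeconfig path an assumption:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; adjust for the environment at hand.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Each ContainerStarted/ContainerDied PLEG event in the log
        // eventually surfaces as a MODIFIED pod with new containerStatuses.
        w, err := client.CoreV1().Pods("openstack").Watch(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            if pod, ok := ev.Object.(*corev1.Pod); ok {
                fmt.Printf("%s %s phase=%s\n", ev.Type, pod.Name, pod.Status.Phase)
            }
        }
    }
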
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fnttx" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.676705 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.676852 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.682853 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fnttx"] Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.740241 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-w4dpt"] Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.747448 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bntc8\" (UniqueName: \"kubernetes.io/projected/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-kube-api-access-bntc8\") pod \"nova-cell1-conductor-db-sync-fnttx\" (UID: \"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\") " pod="openstack/nova-cell1-conductor-db-sync-fnttx" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.747505 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fnttx\" (UID: \"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\") " pod="openstack/nova-cell1-conductor-db-sync-fnttx" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.747556 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-config-data\") pod \"nova-cell1-conductor-db-sync-fnttx\" (UID: \"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\") " pod="openstack/nova-cell1-conductor-db-sync-fnttx" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.747958 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-scripts\") pod \"nova-cell1-conductor-db-sync-fnttx\" (UID: \"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\") " pod="openstack/nova-cell1-conductor-db-sync-fnttx" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.852628 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bntc8\" (UniqueName: \"kubernetes.io/projected/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-kube-api-access-bntc8\") pod \"nova-cell1-conductor-db-sync-fnttx\" (UID: \"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\") " pod="openstack/nova-cell1-conductor-db-sync-fnttx" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.853617 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fnttx\" (UID: \"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\") " pod="openstack/nova-cell1-conductor-db-sync-fnttx" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.853742 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-config-data\") pod \"nova-cell1-conductor-db-sync-fnttx\" (UID: 
\"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\") " pod="openstack/nova-cell1-conductor-db-sync-fnttx" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.853935 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-scripts\") pod \"nova-cell1-conductor-db-sync-fnttx\" (UID: \"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\") " pod="openstack/nova-cell1-conductor-db-sync-fnttx" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.873734 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-scripts\") pod \"nova-cell1-conductor-db-sync-fnttx\" (UID: \"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\") " pod="openstack/nova-cell1-conductor-db-sync-fnttx" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.874291 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fnttx\" (UID: \"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\") " pod="openstack/nova-cell1-conductor-db-sync-fnttx" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.881924 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bntc8\" (UniqueName: \"kubernetes.io/projected/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-kube-api-access-bntc8\") pod \"nova-cell1-conductor-db-sync-fnttx\" (UID: \"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\") " pod="openstack/nova-cell1-conductor-db-sync-fnttx" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.886924 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-config-data\") pod \"nova-cell1-conductor-db-sync-fnttx\" (UID: \"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\") " pod="openstack/nova-cell1-conductor-db-sync-fnttx" Jan 28 15:24:40 crc kubenswrapper[4981]: I0128 15:24:40.893983 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 15:24:41 crc kubenswrapper[4981]: I0128 15:24:41.024667 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lwm9x"] Jan 28 15:24:41 crc kubenswrapper[4981]: I0128 15:24:41.057477 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fnttx" Jan 28 15:24:41 crc kubenswrapper[4981]: I0128 15:24:41.119335 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 15:24:41 crc kubenswrapper[4981]: I0128 15:24:41.338380 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f9e5013b-9327-47dc-ac6f-1e749b59ca64","Type":"ContainerStarted","Data":"42b7783a957cf30c9fc619d2fd27b790a87f1bc2d18ad030db7f1edb23cc47dc"} Jan 28 15:24:41 crc kubenswrapper[4981]: I0128 15:24:41.339507 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" event={"ID":"27c4ebb6-cd4b-4021-a139-49536ce42763","Type":"ContainerStarted","Data":"a4e546bc0330859c7f8be57b2689c67ffc9af1f1f7c572dcfbfb13cd0eb265df"} Jan 28 15:24:41 crc kubenswrapper[4981]: I0128 15:24:41.340486 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a","Type":"ContainerStarted","Data":"fe2925d84a934d725cc07f3a466301c33f302df20066bd645e510815f039353a"} Jan 28 15:24:41 crc kubenswrapper[4981]: I0128 15:24:41.344490 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dc7f2b64-003f-47e8-a3b0-c29cb1c47f55","Type":"ContainerStarted","Data":"c65f59ddaa528a162cd24a2beb0a0fb364275409b8ad0d7da4108834e9270395"} Jan 28 15:24:41 crc kubenswrapper[4981]: I0128 15:24:41.345718 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-w4dpt" event={"ID":"61d4eb9c-3032-485a-9c38-983eae66cbc8","Type":"ContainerStarted","Data":"f676ab8370dce7cc95748402e276fe5dd52680f6df0fd6a3dffdece504da8123"} Jan 28 15:24:41 crc kubenswrapper[4981]: I0128 15:24:41.345750 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-w4dpt" event={"ID":"61d4eb9c-3032-485a-9c38-983eae66cbc8","Type":"ContainerStarted","Data":"452b566c9258e1fefe63872d77f7c9448fd4db9d863cc7d18264bab5e8e143c5"} Jan 28 15:24:41 crc kubenswrapper[4981]: I0128 15:24:41.347614 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"26c2d4af-5bbe-4a1a-a225-3a76eff41226","Type":"ContainerStarted","Data":"2f2c3af610369c8a196431212fca64467c25cac5d9220155adf93307df76f1b5"} Jan 28 15:24:41 crc kubenswrapper[4981]: I0128 15:24:41.656045 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fnttx"] Jan 28 15:24:41 crc kubenswrapper[4981]: W0128 15:24:41.662356 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda50e2ce1_a7ce_4b26_b97d_6823b74cd974.slice/crio-db50b83312c4b952954e35061170cd3032c684284e7dabeda5f0269b064217c2 WatchSource:0}: Error finding container db50b83312c4b952954e35061170cd3032c684284e7dabeda5f0269b064217c2: Status 404 returned error can't find the container with id db50b83312c4b952954e35061170cd3032c684284e7dabeda5f0269b064217c2 Jan 28 15:24:42 crc kubenswrapper[4981]: I0128 15:24:42.357621 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fnttx" event={"ID":"a50e2ce1-a7ce-4b26-b97d-6823b74cd974","Type":"ContainerStarted","Data":"95e7cb789aaeb7fab4347ecc141e8577533d007fc8ea5900d99242fc2942f77d"} Jan 28 15:24:42 crc kubenswrapper[4981]: I0128 15:24:42.358205 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-fnttx" event={"ID":"a50e2ce1-a7ce-4b26-b97d-6823b74cd974","Type":"ContainerStarted","Data":"db50b83312c4b952954e35061170cd3032c684284e7dabeda5f0269b064217c2"} Jan 28 15:24:42 crc kubenswrapper[4981]: I0128 15:24:42.359074 4981 generic.go:334] "Generic (PLEG): container finished" podID="27c4ebb6-cd4b-4021-a139-49536ce42763" containerID="35351ce6ec954a2c80601589a7bd641a5a6fb215cbe11640cc8473d570dfab66" exitCode=0 Jan 28 15:24:42 crc kubenswrapper[4981]: I0128 15:24:42.359177 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" event={"ID":"27c4ebb6-cd4b-4021-a139-49536ce42763","Type":"ContainerDied","Data":"35351ce6ec954a2c80601589a7bd641a5a6fb215cbe11640cc8473d570dfab66"} Jan 28 15:24:42 crc kubenswrapper[4981]: I0128 15:24:42.418988 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-w4dpt" podStartSLOduration=3.418965345 podStartE2EDuration="3.418965345s" podCreationTimestamp="2026-01-28 15:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:42.418054271 +0000 UTC m=+1293.870212542" watchObservedRunningTime="2026-01-28 15:24:42.418965345 +0000 UTC m=+1293.871123576" Jan 28 15:24:43 crc kubenswrapper[4981]: I0128 15:24:43.245552 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 15:24:43 crc kubenswrapper[4981]: I0128 15:24:43.258161 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 15:24:43 crc kubenswrapper[4981]: I0128 15:24:43.382590 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" event={"ID":"27c4ebb6-cd4b-4021-a139-49536ce42763","Type":"ContainerStarted","Data":"4915f6b2b72ae1a6f2341636c6e8f15fa9d9817eb38b3630bcf8a483706cb17f"} Jan 28 15:24:43 crc kubenswrapper[4981]: I0128 15:24:43.406270 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-fnttx" podStartSLOduration=3.406252293 podStartE2EDuration="3.406252293s" podCreationTimestamp="2026-01-28 15:24:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:43.400389606 +0000 UTC m=+1294.852547857" watchObservedRunningTime="2026-01-28 15:24:43.406252293 +0000 UTC m=+1294.858410534" Jan 28 15:24:44 crc kubenswrapper[4981]: I0128 15:24:44.392248 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:24:44 crc kubenswrapper[4981]: I0128 15:24:44.419422 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" podStartSLOduration=5.419405018 podStartE2EDuration="5.419405018s" podCreationTimestamp="2026-01-28 15:24:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:44.414961989 +0000 UTC m=+1295.867120230" watchObservedRunningTime="2026-01-28 15:24:44.419405018 +0000 UTC m=+1295.871563249" Jan 28 15:24:45 crc kubenswrapper[4981]: I0128 15:24:45.402232 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a","Type":"ContainerStarted","Data":"2f1ab23bf744bd7935a340c3b805c6ecbf39887e8d7de3528560f394075d222d"} Jan 28 15:24:45 crc kubenswrapper[4981]: I0128 15:24:45.402670 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a","Type":"ContainerStarted","Data":"e70a822de1ba9d0ca0280767a1e06cac1c48fa614aa01c6d2013b0279f02dfd6"} Jan 28 15:24:45 crc kubenswrapper[4981]: I0128 15:24:45.407565 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dc7f2b64-003f-47e8-a3b0-c29cb1c47f55","Type":"ContainerStarted","Data":"f6f6c593ee94516385a1845445ae913d0e788745620c2357b1ef2fd9813f422e"} Jan 28 15:24:45 crc kubenswrapper[4981]: I0128 15:24:45.407679 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="dc7f2b64-003f-47e8-a3b0-c29cb1c47f55" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://f6f6c593ee94516385a1845445ae913d0e788745620c2357b1ef2fd9813f422e" gracePeriod=30 Jan 28 15:24:45 crc kubenswrapper[4981]: I0128 15:24:45.410851 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"26c2d4af-5bbe-4a1a-a225-3a76eff41226","Type":"ContainerStarted","Data":"34c8accf0ec24a34eea54a5596df6cc00f44218f176c1c7cff06246884a43fa0"} Jan 28 15:24:45 crc kubenswrapper[4981]: I0128 15:24:45.414149 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f9e5013b-9327-47dc-ac6f-1e749b59ca64","Type":"ContainerStarted","Data":"a52085f7bca4e361d587761cb7c7e82631dac1c8cf69167d22887f4f91c3b846"} Jan 28 15:24:45 crc kubenswrapper[4981]: I0128 15:24:45.414213 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f9e5013b-9327-47dc-ac6f-1e749b59ca64","Type":"ContainerStarted","Data":"b485a9f4c2b1128c44db139fb8d9f39300c751028eb9da0b518c5ae67762d4f5"} Jan 28 15:24:45 crc kubenswrapper[4981]: I0128 15:24:45.414499 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f9e5013b-9327-47dc-ac6f-1e749b59ca64" containerName="nova-metadata-log" containerID="cri-o://b485a9f4c2b1128c44db139fb8d9f39300c751028eb9da0b518c5ae67762d4f5" gracePeriod=30 Jan 28 15:24:45 crc kubenswrapper[4981]: I0128 15:24:45.414547 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f9e5013b-9327-47dc-ac6f-1e749b59ca64" containerName="nova-metadata-metadata" containerID="cri-o://a52085f7bca4e361d587761cb7c7e82631dac1c8cf69167d22887f4f91c3b846" gracePeriod=30 Jan 28 15:24:45 crc kubenswrapper[4981]: I0128 15:24:45.433597 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.348006886 podStartE2EDuration="6.433567641s" podCreationTimestamp="2026-01-28 15:24:39 +0000 UTC" firstStartedPulling="2026-01-28 15:24:40.597172317 +0000 UTC m=+1292.049330558" lastFinishedPulling="2026-01-28 15:24:44.682733072 +0000 UTC m=+1296.134891313" observedRunningTime="2026-01-28 15:24:45.421595028 +0000 UTC m=+1296.873753289" watchObservedRunningTime="2026-01-28 15:24:45.433567641 +0000 UTC m=+1296.885725922" Jan 28 15:24:45 crc kubenswrapper[4981]: I0128 15:24:45.445397 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" 
podStartSLOduration=2.3468199419999998 podStartE2EDuration="6.445381038s" podCreationTimestamp="2026-01-28 15:24:39 +0000 UTC" firstStartedPulling="2026-01-28 15:24:40.583590561 +0000 UTC m=+1292.035748792" lastFinishedPulling="2026-01-28 15:24:44.682151647 +0000 UTC m=+1296.134309888" observedRunningTime="2026-01-28 15:24:45.436311494 +0000 UTC m=+1296.888469735" watchObservedRunningTime="2026-01-28 15:24:45.445381038 +0000 UTC m=+1296.897539279"
Jan 28 15:24:45 crc kubenswrapper[4981]: I0128 15:24:45.458440 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.866723814 podStartE2EDuration="6.458423029s" podCreationTimestamp="2026-01-28 15:24:39 +0000 UTC" firstStartedPulling="2026-01-28 15:24:41.09968445 +0000 UTC m=+1292.551842691" lastFinishedPulling="2026-01-28 15:24:44.691383665 +0000 UTC m=+1296.143541906" observedRunningTime="2026-01-28 15:24:45.457555406 +0000 UTC m=+1296.909713647" watchObservedRunningTime="2026-01-28 15:24:45.458423029 +0000 UTC m=+1296.910581270"
Jan 28 15:24:45 crc kubenswrapper[4981]: I0128 15:24:45.481477 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.6847082970000002 podStartE2EDuration="6.481453569s" podCreationTimestamp="2026-01-28 15:24:39 +0000 UTC" firstStartedPulling="2026-01-28 15:24:40.887179332 +0000 UTC m=+1292.339337593" lastFinishedPulling="2026-01-28 15:24:44.683924624 +0000 UTC m=+1296.136082865" observedRunningTime="2026-01-28 15:24:45.471441629 +0000 UTC m=+1296.923599890" watchObservedRunningTime="2026-01-28 15:24:45.481453569 +0000 UTC m=+1296.933611810"
Jan 28 15:24:46 crc kubenswrapper[4981]: I0128 15:24:46.426796 4981 generic.go:334] "Generic (PLEG): container finished" podID="f9e5013b-9327-47dc-ac6f-1e749b59ca64" containerID="b485a9f4c2b1128c44db139fb8d9f39300c751028eb9da0b518c5ae67762d4f5" exitCode=143
Jan 28 15:24:46 crc kubenswrapper[4981]: I0128 15:24:46.427990 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f9e5013b-9327-47dc-ac6f-1e749b59ca64","Type":"ContainerDied","Data":"b485a9f4c2b1128c44db139fb8d9f39300c751028eb9da0b518c5ae67762d4f5"}
Jan 28 15:24:49 crc kubenswrapper[4981]: I0128 15:24:49.469596 4981 generic.go:334] "Generic (PLEG): container finished" podID="61d4eb9c-3032-485a-9c38-983eae66cbc8" containerID="f676ab8370dce7cc95748402e276fe5dd52680f6df0fd6a3dffdece504da8123" exitCode=0
Jan 28 15:24:49 crc kubenswrapper[4981]: I0128 15:24:49.469686 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-w4dpt" event={"ID":"61d4eb9c-3032-485a-9c38-983eae66cbc8","Type":"ContainerDied","Data":"f676ab8370dce7cc95748402e276fe5dd52680f6df0fd6a3dffdece504da8123"}
Jan 28 15:24:49 crc kubenswrapper[4981]: I0128 15:24:49.988047 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 28 15:24:50 crc kubenswrapper[4981]: I0128 15:24:50.006384 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 28 15:24:50 crc kubenswrapper[4981]: I0128 15:24:50.006453 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 28 15:24:50 crc kubenswrapper[4981]: I0128 15:24:50.175979 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 28 15:24:50 crc kubenswrapper[4981]: I0128 15:24:50.176024 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 28 15:24:50 crc kubenswrapper[4981]: I0128 15:24:50.214576 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 28 15:24:50 crc kubenswrapper[4981]: I0128 15:24:50.316788 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 28 15:24:50 crc kubenswrapper[4981]: I0128 15:24:50.317106 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 28 15:24:50 crc kubenswrapper[4981]: I0128 15:24:50.341360 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x"
Jan 28 15:24:50 crc kubenswrapper[4981]: I0128 15:24:50.431617 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-z7d4r"]
Jan 28 15:24:50 crc kubenswrapper[4981]: I0128 15:24:50.431966 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" podUID="331416fc-f914-4ed7-8326-ff3db72c5246" containerName="dnsmasq-dns" containerID="cri-o://dfd0a6027c0c8493ead9b4bc432c8ecb5b485f48ea78962607f507e39418f231" gracePeriod=10
Jan 28 15:24:50 crc kubenswrapper[4981]: I0128 15:24:50.502901 4981 generic.go:334] "Generic (PLEG): container finished" podID="a50e2ce1-a7ce-4b26-b97d-6823b74cd974" containerID="95e7cb789aaeb7fab4347ecc141e8577533d007fc8ea5900d99242fc2942f77d" exitCode=0
Jan 28 15:24:50 crc kubenswrapper[4981]: I0128 15:24:50.503182 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fnttx" event={"ID":"a50e2ce1-a7ce-4b26-b97d-6823b74cd974","Type":"ContainerDied","Data":"95e7cb789aaeb7fab4347ecc141e8577533d007fc8ea5900d99242fc2942f77d"}
Jan 28 15:24:50 crc kubenswrapper[4981]: I0128 15:24:50.564515 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 28 15:24:50 crc kubenswrapper[4981]: E0128 15:24:50.754755 4981 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod331416fc_f914_4ed7_8326_ff3db72c5246.slice/crio-dfd0a6027c0c8493ead9b4bc432c8ecb5b485f48ea78962607f507e39418f231.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod331416fc_f914_4ed7_8326_ff3db72c5246.slice/crio-conmon-dfd0a6027c0c8493ead9b4bc432c8ecb5b485f48ea78962607f507e39418f231.scope\": RecentStats: unable to find data in memory cache]"
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.091843 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.092038 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.187:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.135915 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-w4dpt"
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.144815 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-z7d4r"
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.268720 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-dns-swift-storage-0\") pod \"331416fc-f914-4ed7-8326-ff3db72c5246\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") "
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.268774 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d4eb9c-3032-485a-9c38-983eae66cbc8-combined-ca-bundle\") pod \"61d4eb9c-3032-485a-9c38-983eae66cbc8\" (UID: \"61d4eb9c-3032-485a-9c38-983eae66cbc8\") "
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.268808 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-dns-svc\") pod \"331416fc-f914-4ed7-8326-ff3db72c5246\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") "
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.268945 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61d4eb9c-3032-485a-9c38-983eae66cbc8-config-data\") pod \"61d4eb9c-3032-485a-9c38-983eae66cbc8\" (UID: \"61d4eb9c-3032-485a-9c38-983eae66cbc8\") "
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.268978 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4s6r\" (UniqueName: \"kubernetes.io/projected/331416fc-f914-4ed7-8326-ff3db72c5246-kube-api-access-s4s6r\") pod \"331416fc-f914-4ed7-8326-ff3db72c5246\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") "
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.269048 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-ovsdbserver-nb\") pod \"331416fc-f914-4ed7-8326-ff3db72c5246\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") "
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.269063 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-ovsdbserver-sb\") pod \"331416fc-f914-4ed7-8326-ff3db72c5246\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") "
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.269077 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngbss\" (UniqueName: \"kubernetes.io/projected/61d4eb9c-3032-485a-9c38-983eae66cbc8-kube-api-access-ngbss\") pod \"61d4eb9c-3032-485a-9c38-983eae66cbc8\" (UID: \"61d4eb9c-3032-485a-9c38-983eae66cbc8\") "
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.269100 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61d4eb9c-3032-485a-9c38-983eae66cbc8-scripts\") pod \"61d4eb9c-3032-485a-9c38-983eae66cbc8\" (UID: \"61d4eb9c-3032-485a-9c38-983eae66cbc8\") "
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.269119 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-config\") pod \"331416fc-f914-4ed7-8326-ff3db72c5246\" (UID: \"331416fc-f914-4ed7-8326-ff3db72c5246\") "
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.305380 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/331416fc-f914-4ed7-8326-ff3db72c5246-kube-api-access-s4s6r" (OuterVolumeSpecName: "kube-api-access-s4s6r") pod "331416fc-f914-4ed7-8326-ff3db72c5246" (UID: "331416fc-f914-4ed7-8326-ff3db72c5246"). InnerVolumeSpecName "kube-api-access-s4s6r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.306248 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61d4eb9c-3032-485a-9c38-983eae66cbc8-kube-api-access-ngbss" (OuterVolumeSpecName: "kube-api-access-ngbss") pod "61d4eb9c-3032-485a-9c38-983eae66cbc8" (UID: "61d4eb9c-3032-485a-9c38-983eae66cbc8"). InnerVolumeSpecName "kube-api-access-ngbss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.325388 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61d4eb9c-3032-485a-9c38-983eae66cbc8-scripts" (OuterVolumeSpecName: "scripts") pod "61d4eb9c-3032-485a-9c38-983eae66cbc8" (UID: "61d4eb9c-3032-485a-9c38-983eae66cbc8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.337562 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61d4eb9c-3032-485a-9c38-983eae66cbc8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61d4eb9c-3032-485a-9c38-983eae66cbc8" (UID: "61d4eb9c-3032-485a-9c38-983eae66cbc8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.384323 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4s6r\" (UniqueName: \"kubernetes.io/projected/331416fc-f914-4ed7-8326-ff3db72c5246-kube-api-access-s4s6r\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.384351 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngbss\" (UniqueName: \"kubernetes.io/projected/61d4eb9c-3032-485a-9c38-983eae66cbc8-kube-api-access-ngbss\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.384360 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61d4eb9c-3032-485a-9c38-983eae66cbc8-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.384369 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61d4eb9c-3032-485a-9c38-983eae66cbc8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.441832 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "331416fc-f914-4ed7-8326-ff3db72c5246" (UID: "331416fc-f914-4ed7-8326-ff3db72c5246"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.443243 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61d4eb9c-3032-485a-9c38-983eae66cbc8-config-data" (OuterVolumeSpecName: "config-data") pod "61d4eb9c-3032-485a-9c38-983eae66cbc8" (UID: "61d4eb9c-3032-485a-9c38-983eae66cbc8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.450221 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "331416fc-f914-4ed7-8326-ff3db72c5246" (UID: "331416fc-f914-4ed7-8326-ff3db72c5246"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.464804 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "331416fc-f914-4ed7-8326-ff3db72c5246" (UID: "331416fc-f914-4ed7-8326-ff3db72c5246"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.468512 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-config" (OuterVolumeSpecName: "config") pod "331416fc-f914-4ed7-8326-ff3db72c5246" (UID: "331416fc-f914-4ed7-8326-ff3db72c5246"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.472788 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "331416fc-f914-4ed7-8326-ff3db72c5246" (UID: "331416fc-f914-4ed7-8326-ff3db72c5246"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.486431 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.486462 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.486472 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-config\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.486484 4981 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.486494 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/331416fc-f914-4ed7-8326-ff3db72c5246-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.486504 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61d4eb9c-3032-485a-9c38-983eae66cbc8-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.514954 4981 generic.go:334] "Generic (PLEG): container finished" podID="331416fc-f914-4ed7-8326-ff3db72c5246" containerID="dfd0a6027c0c8493ead9b4bc432c8ecb5b485f48ea78962607f507e39418f231" exitCode=0
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.515031 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" event={"ID":"331416fc-f914-4ed7-8326-ff3db72c5246","Type":"ContainerDied","Data":"dfd0a6027c0c8493ead9b4bc432c8ecb5b485f48ea78962607f507e39418f231"}
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.515067 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5784cf869f-z7d4r" event={"ID":"331416fc-f914-4ed7-8326-ff3db72c5246","Type":"ContainerDied","Data":"b1f64d7575d8909b46d18d98c8d1daa1dc3d0b88b550e7c08a09e283f70f1a61"}
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.515087 4981 scope.go:117] "RemoveContainer" containerID="dfd0a6027c0c8493ead9b4bc432c8ecb5b485f48ea78962607f507e39418f231"
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.515285 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5784cf869f-z7d4r"
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.518357 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-w4dpt" event={"ID":"61d4eb9c-3032-485a-9c38-983eae66cbc8","Type":"ContainerDied","Data":"452b566c9258e1fefe63872d77f7c9448fd4db9d863cc7d18264bab5e8e143c5"}
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.518394 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="452b566c9258e1fefe63872d77f7c9448fd4db9d863cc7d18264bab5e8e143c5"
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.518435 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-w4dpt"
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.543939 4981 scope.go:117] "RemoveContainer" containerID="5f3d403873b8c2defde5017c32acc5d63bd22b9d80e556c87a060fb4dee0a46e"
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.572820 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-z7d4r"]
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.580609 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5784cf869f-z7d4r"]
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.611824 4981 scope.go:117] "RemoveContainer" containerID="dfd0a6027c0c8493ead9b4bc432c8ecb5b485f48ea78962607f507e39418f231"
Jan 28 15:24:51 crc kubenswrapper[4981]: E0128 15:24:51.614260 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfd0a6027c0c8493ead9b4bc432c8ecb5b485f48ea78962607f507e39418f231\": container with ID starting with dfd0a6027c0c8493ead9b4bc432c8ecb5b485f48ea78962607f507e39418f231 not found: ID does not exist" containerID="dfd0a6027c0c8493ead9b4bc432c8ecb5b485f48ea78962607f507e39418f231"
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.614297 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfd0a6027c0c8493ead9b4bc432c8ecb5b485f48ea78962607f507e39418f231"} err="failed to get container status \"dfd0a6027c0c8493ead9b4bc432c8ecb5b485f48ea78962607f507e39418f231\": rpc error: code = NotFound desc = could not find container \"dfd0a6027c0c8493ead9b4bc432c8ecb5b485f48ea78962607f507e39418f231\": container with ID starting with dfd0a6027c0c8493ead9b4bc432c8ecb5b485f48ea78962607f507e39418f231 not found: ID does not exist"
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.614323 4981 scope.go:117] "RemoveContainer" containerID="5f3d403873b8c2defde5017c32acc5d63bd22b9d80e556c87a060fb4dee0a46e"
Jan 28 15:24:51 crc kubenswrapper[4981]: E0128 15:24:51.615276 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f3d403873b8c2defde5017c32acc5d63bd22b9d80e556c87a060fb4dee0a46e\": container with ID starting with 5f3d403873b8c2defde5017c32acc5d63bd22b9d80e556c87a060fb4dee0a46e not found: ID does not exist" containerID="5f3d403873b8c2defde5017c32acc5d63bd22b9d80e556c87a060fb4dee0a46e"
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.615299 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f3d403873b8c2defde5017c32acc5d63bd22b9d80e556c87a060fb4dee0a46e"} err="failed to get container status \"5f3d403873b8c2defde5017c32acc5d63bd22b9d80e556c87a060fb4dee0a46e\": rpc error: code = NotFound desc = could not find container \"5f3d403873b8c2defde5017c32acc5d63bd22b9d80e556c87a060fb4dee0a46e\": container with ID starting with 5f3d403873b8c2defde5017c32acc5d63bd22b9d80e556c87a060fb4dee0a46e not found: ID does not exist"
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.681360 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.681623 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a" containerName="nova-api-log" containerID="cri-o://e70a822de1ba9d0ca0280767a1e06cac1c48fa614aa01c6d2013b0279f02dfd6" gracePeriod=30
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.682120 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a" containerName="nova-api-api" containerID="cri-o://2f1ab23bf744bd7935a340c3b805c6ecbf39887e8d7de3528560f394075d222d" gracePeriod=30
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.934012 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 15:24:51 crc kubenswrapper[4981]: I0128 15:24:51.957994 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fnttx"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.001428 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-combined-ca-bundle\") pod \"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\" (UID: \"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\") "
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.001523 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-scripts\") pod \"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\" (UID: \"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\") "
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.001572 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-config-data\") pod \"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\" (UID: \"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\") "
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.001595 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bntc8\" (UniqueName: \"kubernetes.io/projected/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-kube-api-access-bntc8\") pod \"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\" (UID: \"a50e2ce1-a7ce-4b26-b97d-6823b74cd974\") "
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.006420 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-scripts" (OuterVolumeSpecName: "scripts") pod "a50e2ce1-a7ce-4b26-b97d-6823b74cd974" (UID: "a50e2ce1-a7ce-4b26-b97d-6823b74cd974"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.010300 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-kube-api-access-bntc8" (OuterVolumeSpecName: "kube-api-access-bntc8") pod "a50e2ce1-a7ce-4b26-b97d-6823b74cd974" (UID: "a50e2ce1-a7ce-4b26-b97d-6823b74cd974"). InnerVolumeSpecName "kube-api-access-bntc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.036085 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a50e2ce1-a7ce-4b26-b97d-6823b74cd974" (UID: "a50e2ce1-a7ce-4b26-b97d-6823b74cd974"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.053504 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-config-data" (OuterVolumeSpecName: "config-data") pod "a50e2ce1-a7ce-4b26-b97d-6823b74cd974" (UID: "a50e2ce1-a7ce-4b26-b97d-6823b74cd974"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.103798 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.103832 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.103842 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.103852 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bntc8\" (UniqueName: \"kubernetes.io/projected/a50e2ce1-a7ce-4b26-b97d-6823b74cd974-kube-api-access-bntc8\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.529349 4981 generic.go:334] "Generic (PLEG): container finished" podID="fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a" containerID="e70a822de1ba9d0ca0280767a1e06cac1c48fa614aa01c6d2013b0279f02dfd6" exitCode=143
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.529419 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a","Type":"ContainerDied","Data":"e70a822de1ba9d0ca0280767a1e06cac1c48fa614aa01c6d2013b0279f02dfd6"}
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.531593 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fnttx" event={"ID":"a50e2ce1-a7ce-4b26-b97d-6823b74cd974","Type":"ContainerDied","Data":"db50b83312c4b952954e35061170cd3032c684284e7dabeda5f0269b064217c2"}
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.531620 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db50b83312c4b952954e35061170cd3032c684284e7dabeda5f0269b064217c2"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.531709 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fnttx"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.536155 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="26c2d4af-5bbe-4a1a-a225-3a76eff41226" containerName="nova-scheduler-scheduler" containerID="cri-o://34c8accf0ec24a34eea54a5596df6cc00f44218f176c1c7cff06246884a43fa0" gracePeriod=30
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.610921 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 28 15:24:52 crc kubenswrapper[4981]: E0128 15:24:52.611322 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="331416fc-f914-4ed7-8326-ff3db72c5246" containerName="dnsmasq-dns"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.611344 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="331416fc-f914-4ed7-8326-ff3db72c5246" containerName="dnsmasq-dns"
Jan 28 15:24:52 crc kubenswrapper[4981]: E0128 15:24:52.611369 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a50e2ce1-a7ce-4b26-b97d-6823b74cd974" containerName="nova-cell1-conductor-db-sync"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.611375 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a50e2ce1-a7ce-4b26-b97d-6823b74cd974" containerName="nova-cell1-conductor-db-sync"
Jan 28 15:24:52 crc kubenswrapper[4981]: E0128 15:24:52.611394 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="331416fc-f914-4ed7-8326-ff3db72c5246" containerName="init"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.611402 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="331416fc-f914-4ed7-8326-ff3db72c5246" containerName="init"
Jan 28 15:24:52 crc kubenswrapper[4981]: E0128 15:24:52.611413 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61d4eb9c-3032-485a-9c38-983eae66cbc8" containerName="nova-manage"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.611421 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="61d4eb9c-3032-485a-9c38-983eae66cbc8" containerName="nova-manage"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.611653 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="61d4eb9c-3032-485a-9c38-983eae66cbc8" containerName="nova-manage"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.611676 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="a50e2ce1-a7ce-4b26-b97d-6823b74cd974" containerName="nova-cell1-conductor-db-sync"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.611698 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="331416fc-f914-4ed7-8326-ff3db72c5246" containerName="dnsmasq-dns"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.612383 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.614717 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.630352 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.713752 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6994064-73bf-495e-928b-5ef46487e938-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f6994064-73bf-495e-928b-5ef46487e938\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.713875 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx7pd\" (UniqueName: \"kubernetes.io/projected/f6994064-73bf-495e-928b-5ef46487e938-kube-api-access-kx7pd\") pod \"nova-cell1-conductor-0\" (UID: \"f6994064-73bf-495e-928b-5ef46487e938\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.713990 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6994064-73bf-495e-928b-5ef46487e938-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f6994064-73bf-495e-928b-5ef46487e938\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.816306 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6994064-73bf-495e-928b-5ef46487e938-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f6994064-73bf-495e-928b-5ef46487e938\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.816367 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx7pd\" (UniqueName: \"kubernetes.io/projected/f6994064-73bf-495e-928b-5ef46487e938-kube-api-access-kx7pd\") pod \"nova-cell1-conductor-0\" (UID: \"f6994064-73bf-495e-928b-5ef46487e938\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.816416 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6994064-73bf-495e-928b-5ef46487e938-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f6994064-73bf-495e-928b-5ef46487e938\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.821944 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6994064-73bf-495e-928b-5ef46487e938-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f6994064-73bf-495e-928b-5ef46487e938\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.823732 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6994064-73bf-495e-928b-5ef46487e938-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f6994064-73bf-495e-928b-5ef46487e938\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.830950 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx7pd\" (UniqueName: \"kubernetes.io/projected/f6994064-73bf-495e-928b-5ef46487e938-kube-api-access-kx7pd\") pod \"nova-cell1-conductor-0\" (UID: \"f6994064-73bf-495e-928b-5ef46487e938\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 15:24:52 crc kubenswrapper[4981]: I0128 15:24:52.941620 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 28 15:24:53 crc kubenswrapper[4981]: I0128 15:24:53.328268 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="331416fc-f914-4ed7-8326-ff3db72c5246" path="/var/lib/kubelet/pods/331416fc-f914-4ed7-8326-ff3db72c5246/volumes"
Jan 28 15:24:53 crc kubenswrapper[4981]: I0128 15:24:53.435900 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 28 15:24:53 crc kubenswrapper[4981]: I0128 15:24:53.551504 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f6994064-73bf-495e-928b-5ef46487e938","Type":"ContainerStarted","Data":"83769ae04f129a92fef2f9078fbac570e7b653de872c396264e37417eea9147b"}
Jan 28 15:24:54 crc kubenswrapper[4981]: I0128 15:24:54.559694 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f6994064-73bf-495e-928b-5ef46487e938","Type":"ContainerStarted","Data":"565c60637bed06da01be5ec13b47d0b6a212a0b3ba0b73ca1b1ffdcf2e50a439"}
Jan 28 15:24:54 crc kubenswrapper[4981]: I0128 15:24:54.561154 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Jan 28 15:24:54 crc kubenswrapper[4981]: I0128 15:24:54.581523 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.581491258 podStartE2EDuration="2.581491258s" podCreationTimestamp="2026-01-28 15:24:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:54.574537311 +0000 UTC m=+1306.026695612" watchObservedRunningTime="2026-01-28 15:24:54.581491258 +0000 UTC m=+1306.033649559"
Jan 28 15:24:55 crc kubenswrapper[4981]: E0128 15:24:55.177576 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="34c8accf0ec24a34eea54a5596df6cc00f44218f176c1c7cff06246884a43fa0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 28 15:24:55 crc kubenswrapper[4981]: E0128 15:24:55.180224 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="34c8accf0ec24a34eea54a5596df6cc00f44218f176c1c7cff06246884a43fa0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 28 15:24:55 crc kubenswrapper[4981]: E0128 15:24:55.181755 4981 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="34c8accf0ec24a34eea54a5596df6cc00f44218f176c1c7cff06246884a43fa0" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 28 15:24:55 crc kubenswrapper[4981]: E0128 15:24:55.181792 4981 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="26c2d4af-5bbe-4a1a-a225-3a76eff41226" containerName="nova-scheduler-scheduler"
Jan 28 15:24:56 crc kubenswrapper[4981]: I0128 15:24:56.584542 4981 generic.go:334] "Generic (PLEG): container finished" podID="26c2d4af-5bbe-4a1a-a225-3a76eff41226" containerID="34c8accf0ec24a34eea54a5596df6cc00f44218f176c1c7cff06246884a43fa0" exitCode=0
Jan 28 15:24:56 crc kubenswrapper[4981]: I0128 15:24:56.584762 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"26c2d4af-5bbe-4a1a-a225-3a76eff41226","Type":"ContainerDied","Data":"34c8accf0ec24a34eea54a5596df6cc00f44218f176c1c7cff06246884a43fa0"}
Jan 28 15:24:56 crc kubenswrapper[4981]: I0128 15:24:56.900249 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 28 15:24:56 crc kubenswrapper[4981]: I0128 15:24:56.999106 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c2d4af-5bbe-4a1a-a225-3a76eff41226-combined-ca-bundle\") pod \"26c2d4af-5bbe-4a1a-a225-3a76eff41226\" (UID: \"26c2d4af-5bbe-4a1a-a225-3a76eff41226\") "
Jan 28 15:24:56 crc kubenswrapper[4981]: I0128 15:24:56.999151 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c2d4af-5bbe-4a1a-a225-3a76eff41226-config-data\") pod \"26c2d4af-5bbe-4a1a-a225-3a76eff41226\" (UID: \"26c2d4af-5bbe-4a1a-a225-3a76eff41226\") "
Jan 28 15:24:56 crc kubenswrapper[4981]: I0128 15:24:56.999283 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqwss\" (UniqueName: \"kubernetes.io/projected/26c2d4af-5bbe-4a1a-a225-3a76eff41226-kube-api-access-zqwss\") pod \"26c2d4af-5bbe-4a1a-a225-3a76eff41226\" (UID: \"26c2d4af-5bbe-4a1a-a225-3a76eff41226\") "
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.010446 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26c2d4af-5bbe-4a1a-a225-3a76eff41226-kube-api-access-zqwss" (OuterVolumeSpecName: "kube-api-access-zqwss") pod "26c2d4af-5bbe-4a1a-a225-3a76eff41226" (UID: "26c2d4af-5bbe-4a1a-a225-3a76eff41226"). InnerVolumeSpecName "kube-api-access-zqwss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.030747 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26c2d4af-5bbe-4a1a-a225-3a76eff41226-config-data" (OuterVolumeSpecName: "config-data") pod "26c2d4af-5bbe-4a1a-a225-3a76eff41226" (UID: "26c2d4af-5bbe-4a1a-a225-3a76eff41226"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.032681 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26c2d4af-5bbe-4a1a-a225-3a76eff41226-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26c2d4af-5bbe-4a1a-a225-3a76eff41226" (UID: "26c2d4af-5bbe-4a1a-a225-3a76eff41226"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.101734 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqwss\" (UniqueName: \"kubernetes.io/projected/26c2d4af-5bbe-4a1a-a225-3a76eff41226-kube-api-access-zqwss\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.101768 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c2d4af-5bbe-4a1a-a225-3a76eff41226-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.101783 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c2d4af-5bbe-4a1a-a225-3a76eff41226-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.546986 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.594421 4981 generic.go:334] "Generic (PLEG): container finished" podID="fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a" containerID="2f1ab23bf744bd7935a340c3b805c6ecbf39887e8d7de3528560f394075d222d" exitCode=0
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.594474 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a","Type":"ContainerDied","Data":"2f1ab23bf744bd7935a340c3b805c6ecbf39887e8d7de3528560f394075d222d"}
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.594497 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a","Type":"ContainerDied","Data":"fe2925d84a934d725cc07f3a466301c33f302df20066bd645e510815f039353a"}
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.594514 4981 scope.go:117] "RemoveContainer" containerID="2f1ab23bf744bd7935a340c3b805c6ecbf39887e8d7de3528560f394075d222d"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.594620 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.609537 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpftv\" (UniqueName: \"kubernetes.io/projected/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-kube-api-access-hpftv\") pod \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\" (UID: \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\") "
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.609681 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-config-data\") pod \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\" (UID: \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\") "
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.609722 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-logs\") pod \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\" (UID: \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\") "
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.609755 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-combined-ca-bundle\") pod \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\" (UID: \"fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a\") "
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.610274 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-logs" (OuterVolumeSpecName: "logs") pod "fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a" (UID: "fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.610716 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-logs\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.613773 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"26c2d4af-5bbe-4a1a-a225-3a76eff41226","Type":"ContainerDied","Data":"2f2c3af610369c8a196431212fca64467c25cac5d9220155adf93307df76f1b5"}
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.613830 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.614278 4981 scope.go:117] "RemoveContainer" containerID="e70a822de1ba9d0ca0280767a1e06cac1c48fa614aa01c6d2013b0279f02dfd6"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.616380 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-kube-api-access-hpftv" (OuterVolumeSpecName: "kube-api-access-hpftv") pod "fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a" (UID: "fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a"). InnerVolumeSpecName "kube-api-access-hpftv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.632469 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-config-data" (OuterVolumeSpecName: "config-data") pod "fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a" (UID: "fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.634770 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a" (UID: "fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.712309 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.712339 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpftv\" (UniqueName: \"kubernetes.io/projected/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-kube-api-access-hpftv\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.712350 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.712748 4981 scope.go:117] "RemoveContainer" containerID="2f1ab23bf744bd7935a340c3b805c6ecbf39887e8d7de3528560f394075d222d"
Jan 28 15:24:57 crc kubenswrapper[4981]: E0128 15:24:57.713140 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f1ab23bf744bd7935a340c3b805c6ecbf39887e8d7de3528560f394075d222d\": container with ID starting with 2f1ab23bf744bd7935a340c3b805c6ecbf39887e8d7de3528560f394075d222d not found: ID does not exist" containerID="2f1ab23bf744bd7935a340c3b805c6ecbf39887e8d7de3528560f394075d222d"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.713231 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1ab23bf744bd7935a340c3b805c6ecbf39887e8d7de3528560f394075d222d"} err="failed to get container status \"2f1ab23bf744bd7935a340c3b805c6ecbf39887e8d7de3528560f394075d222d\": rpc error: code = NotFound desc = could not find container \"2f1ab23bf744bd7935a340c3b805c6ecbf39887e8d7de3528560f394075d222d\": container with ID starting with 2f1ab23bf744bd7935a340c3b805c6ecbf39887e8d7de3528560f394075d222d not found: ID does not exist"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.713258 4981 scope.go:117] "RemoveContainer" containerID="e70a822de1ba9d0ca0280767a1e06cac1c48fa614aa01c6d2013b0279f02dfd6"
Jan 28 15:24:57 crc kubenswrapper[4981]: E0128 15:24:57.713540 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e70a822de1ba9d0ca0280767a1e06cac1c48fa614aa01c6d2013b0279f02dfd6\": container with ID starting with e70a822de1ba9d0ca0280767a1e06cac1c48fa614aa01c6d2013b0279f02dfd6 not found: ID does not exist" containerID="e70a822de1ba9d0ca0280767a1e06cac1c48fa614aa01c6d2013b0279f02dfd6"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.713570 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e70a822de1ba9d0ca0280767a1e06cac1c48fa614aa01c6d2013b0279f02dfd6"} err="failed to get container status \"e70a822de1ba9d0ca0280767a1e06cac1c48fa614aa01c6d2013b0279f02dfd6\": rpc error: code = NotFound desc = could not find container \"e70a822de1ba9d0ca0280767a1e06cac1c48fa614aa01c6d2013b0279f02dfd6\": container with ID starting with e70a822de1ba9d0ca0280767a1e06cac1c48fa614aa01c6d2013b0279f02dfd6 not found: ID does not exist"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.713592 4981 scope.go:117] "RemoveContainer" containerID="34c8accf0ec24a34eea54a5596df6cc00f44218f176c1c7cff06246884a43fa0"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.741530 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.757115 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.769008 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 15:24:57 crc kubenswrapper[4981]: E0128 15:24:57.769720 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a" containerName="nova-api-log"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.769738 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a" containerName="nova-api-log"
Jan 28 15:24:57 crc kubenswrapper[4981]: E0128 15:24:57.769771 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a" containerName="nova-api-api"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.769778 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a" containerName="nova-api-api"
Jan 28 15:24:57 crc kubenswrapper[4981]: E0128 15:24:57.769804 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c2d4af-5bbe-4a1a-a225-3a76eff41226" containerName="nova-scheduler-scheduler"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.769810 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="26c2d4af-5bbe-4a1a-a225-3a76eff41226" containerName="nova-scheduler-scheduler"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.770075 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a" containerName="nova-api-log"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.770094 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c2d4af-5bbe-4a1a-a225-3a76eff41226" containerName="nova-scheduler-scheduler"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.770114 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a" containerName="nova-api-api"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.771035 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.773703 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.787080 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.814341 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2nm6\" (UniqueName: \"kubernetes.io/projected/c99460aa-0162-4b6e-ad22-5f99def5df5e-kube-api-access-c2nm6\") pod \"nova-scheduler-0\" (UID: \"c99460aa-0162-4b6e-ad22-5f99def5df5e\") " pod="openstack/nova-scheduler-0"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.814439 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99460aa-0162-4b6e-ad22-5f99def5df5e-config-data\") pod \"nova-scheduler-0\" (UID: \"c99460aa-0162-4b6e-ad22-5f99def5df5e\") " pod="openstack/nova-scheduler-0"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.814586 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99460aa-0162-4b6e-ad22-5f99def5df5e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c99460aa-0162-4b6e-ad22-5f99def5df5e\") " pod="openstack/nova-scheduler-0"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.918116 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99460aa-0162-4b6e-ad22-5f99def5df5e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c99460aa-0162-4b6e-ad22-5f99def5df5e\") " pod="openstack/nova-scheduler-0"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.918316 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2nm6\" (UniqueName: \"kubernetes.io/projected/c99460aa-0162-4b6e-ad22-5f99def5df5e-kube-api-access-c2nm6\") pod \"nova-scheduler-0\" (UID: \"c99460aa-0162-4b6e-ad22-5f99def5df5e\") " pod="openstack/nova-scheduler-0"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.918381 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99460aa-0162-4b6e-ad22-5f99def5df5e-config-data\") pod \"nova-scheduler-0\" (UID: \"c99460aa-0162-4b6e-ad22-5f99def5df5e\") " pod="openstack/nova-scheduler-0"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.923736 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99460aa-0162-4b6e-ad22-5f99def5df5e-config-data\") pod \"nova-scheduler-0\" (UID: \"c99460aa-0162-4b6e-ad22-5f99def5df5e\") " pod="openstack/nova-scheduler-0"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.927207 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99460aa-0162-4b6e-ad22-5f99def5df5e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c99460aa-0162-4b6e-ad22-5f99def5df5e\") " pod="openstack/nova-scheduler-0"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.933381 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.942735 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2nm6\" (UniqueName: \"kubernetes.io/projected/c99460aa-0162-4b6e-ad22-5f99def5df5e-kube-api-access-c2nm6\") pod \"nova-scheduler-0\" (UID: \"c99460aa-0162-4b6e-ad22-5f99def5df5e\") " pod="openstack/nova-scheduler-0"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.943953 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.953819 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.955473 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.957671 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 28 15:24:57 crc kubenswrapper[4981]: I0128 15:24:57.962164 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 28 15:24:58 crc kubenswrapper[4981]: I0128 15:24:58.020073 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-logs\") pod \"nova-api-0\" (UID: \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\") " pod="openstack/nova-api-0"
Jan 28 15:24:58 crc kubenswrapper[4981]: I0128 15:24:58.020122 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scm9w\" (UniqueName: \"kubernetes.io/projected/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-kube-api-access-scm9w\") pod \"nova-api-0\" (UID: \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\") " pod="openstack/nova-api-0"
Jan 28 15:24:58 crc kubenswrapper[4981]: I0128 15:24:58.020303 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\") " pod="openstack/nova-api-0"
Jan 28 15:24:58 crc kubenswrapper[4981]: I0128 15:24:58.020377 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-config-data\") pod \"nova-api-0\" (UID: \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\") " pod="openstack/nova-api-0"
Jan 28 15:24:58 crc kubenswrapper[4981]: I0128 15:24:58.095638 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 28 15:24:58 crc kubenswrapper[4981]: I0128 15:24:58.121768 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-logs\") pod \"nova-api-0\" (UID: \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\") " pod="openstack/nova-api-0"
Jan 28 15:24:58 crc kubenswrapper[4981]: I0128 15:24:58.121812 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scm9w\" (UniqueName: \"kubernetes.io/projected/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-kube-api-access-scm9w\") pod \"nova-api-0\" (UID: \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\") " pod="openstack/nova-api-0"
Jan 28 15:24:58 crc kubenswrapper[4981]: I0128 15:24:58.121896 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\") " pod="openstack/nova-api-0"
Jan 28 15:24:58 crc kubenswrapper[4981]: I0128 15:24:58.121953 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-config-data\") pod \"nova-api-0\" (UID: \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\") " pod="openstack/nova-api-0"
Jan 28 15:24:58 crc kubenswrapper[4981]: I0128 15:24:58.122244 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-logs\") pod \"nova-api-0\" (UID: \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\") " pod="openstack/nova-api-0"
Jan 28 15:24:58 crc kubenswrapper[4981]: I0128 15:24:58.126669 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\") " pod="openstack/nova-api-0"
Jan 28 15:24:58 crc kubenswrapper[4981]: I0128 15:24:58.126852 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-config-data\") pod \"nova-api-0\" (UID: \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\") " pod="openstack/nova-api-0"
Jan 28 15:24:58 crc kubenswrapper[4981]: I0128 15:24:58.138420 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scm9w\" (UniqueName: \"kubernetes.io/projected/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-kube-api-access-scm9w\") pod \"nova-api-0\" (UID: \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\") " pod="openstack/nova-api-0"
Jan 28 15:24:58 crc kubenswrapper[4981]: I0128 15:24:58.295059 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 15:24:58 crc kubenswrapper[4981]: I0128 15:24:58.537369 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 15:24:58 crc kubenswrapper[4981]: I0128 15:24:58.623917 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c99460aa-0162-4b6e-ad22-5f99def5df5e","Type":"ContainerStarted","Data":"6bd41f92b18e14eeabf432bdcec55055d8e1a11d2af76c26d7aa05f8b0f585ec"}
Jan 28 15:24:58 crc kubenswrapper[4981]: I0128 15:24:58.828366 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 28 15:24:58 crc kubenswrapper[4981]: W0128 15:24:58.834219 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2b0f8b7_65ba_459e_a209_7349db6a0ba2.slice/crio-dd311ec3c811468eae5e322ded8a23ac2e77ec77ea7dd43c7e9280b7842b13e6 WatchSource:0}: Error finding container dd311ec3c811468eae5e322ded8a23ac2e77ec77ea7dd43c7e9280b7842b13e6: Status 404 returned error can't find the container with id dd311ec3c811468eae5e322ded8a23ac2e77ec77ea7dd43c7e9280b7842b13e6
Jan 28 15:24:59 crc kubenswrapper[4981]: I0128 15:24:59.329304 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26c2d4af-5bbe-4a1a-a225-3a76eff41226" path="/var/lib/kubelet/pods/26c2d4af-5bbe-4a1a-a225-3a76eff41226/volumes"
Jan 28 15:24:59 crc kubenswrapper[4981]: I0128 15:24:59.330354 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a" path="/var/lib/kubelet/pods/fdce4c4b-cc40-45e2-a4d8-fcf60ed4db9a/volumes"
Jan 28 15:24:59 crc kubenswrapper[4981]: I0128 15:24:59.633680 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c99460aa-0162-4b6e-ad22-5f99def5df5e","Type":"ContainerStarted","Data":"962112e0cf7495f9ca2267b48bc4c1ba4abbbea396f47ad9597532c95522bda8"}
Jan 28 15:24:59 crc kubenswrapper[4981]: I0128 15:24:59.636288 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2b0f8b7-65ba-459e-a209-7349db6a0ba2","Type":"ContainerStarted","Data":"cd5207b105c82a367d3c808f33c579d27a6b784d8c09d362993f3c1423f822da"}
Jan 28 15:24:59 crc kubenswrapper[4981]: I0128 15:24:59.636322 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2b0f8b7-65ba-459e-a209-7349db6a0ba2","Type":"ContainerStarted","Data":"9253d3964d74b93e45879373a6a1e9160b9ce2c0509ccd6c5dd1db2a6cf4e434"}
Jan 28 15:24:59 crc kubenswrapper[4981]: I0128 15:24:59.636334 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2b0f8b7-65ba-459e-a209-7349db6a0ba2","Type":"ContainerStarted","Data":"dd311ec3c811468eae5e322ded8a23ac2e77ec77ea7dd43c7e9280b7842b13e6"}
Jan 28 15:24:59 crc kubenswrapper[4981]: I0128 15:24:59.658729 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.65871183 podStartE2EDuration="2.65871183s" podCreationTimestamp="2026-01-28 15:24:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:59.650725545 +0000 UTC m=+1311.102883786" watchObservedRunningTime="2026-01-28 15:24:59.65871183 +0000 UTC m=+1311.110870071"
Jan 28 15:24:59 crc kubenswrapper[4981]: I0128 15:24:59.674862 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.674842854 podStartE2EDuration="2.674842854s" podCreationTimestamp="2026-01-28 15:24:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:24:59.668798361 +0000 UTC m=+1311.120956602" watchObservedRunningTime="2026-01-28 15:24:59.674842854 +0000 UTC m=+1311.127001085"
Jan 28 15:25:00 crc kubenswrapper[4981]: I0128 15:25:00.686535 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 28 15:25:02 crc kubenswrapper[4981]: I0128 15:25:02.993800 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 28 15:25:03 crc kubenswrapper[4981]: I0128 15:25:03.097323 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 28 15:25:04 crc kubenswrapper[4981]: I0128 15:25:04.367622 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 28 15:25:04 crc kubenswrapper[4981]: I0128 15:25:04.368116 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="756f0fb3-a2dc-4084-bd40-85fa0bf855bd" containerName="kube-state-metrics" containerID="cri-o://690dd72fb979939021095f044a3c5d5b6a2df24f6357b3c47a23e68bdf325490" gracePeriod=30
Jan 28 15:25:04 crc kubenswrapper[4981]: I0128 15:25:04.681502 4981 generic.go:334] "Generic (PLEG): container finished" podID="756f0fb3-a2dc-4084-bd40-85fa0bf855bd" containerID="690dd72fb979939021095f044a3c5d5b6a2df24f6357b3c47a23e68bdf325490" exitCode=2
Jan 28 15:25:04 crc kubenswrapper[4981]: I0128 15:25:04.681564 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"756f0fb3-a2dc-4084-bd40-85fa0bf855bd","Type":"ContainerDied","Data":"690dd72fb979939021095f044a3c5d5b6a2df24f6357b3c47a23e68bdf325490"}
Jan 28 15:25:04 crc kubenswrapper[4981]: I0128 15:25:04.871360 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 28 15:25:04 crc kubenswrapper[4981]: I0128 15:25:04.963427 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxd4g\" (UniqueName: \"kubernetes.io/projected/756f0fb3-a2dc-4084-bd40-85fa0bf855bd-kube-api-access-cxd4g\") pod \"756f0fb3-a2dc-4084-bd40-85fa0bf855bd\" (UID: \"756f0fb3-a2dc-4084-bd40-85fa0bf855bd\") "
Jan 28 15:25:04 crc kubenswrapper[4981]: I0128 15:25:04.976476 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/756f0fb3-a2dc-4084-bd40-85fa0bf855bd-kube-api-access-cxd4g" (OuterVolumeSpecName: "kube-api-access-cxd4g") pod "756f0fb3-a2dc-4084-bd40-85fa0bf855bd" (UID: "756f0fb3-a2dc-4084-bd40-85fa0bf855bd"). InnerVolumeSpecName "kube-api-access-cxd4g".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.065372 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxd4g\" (UniqueName: \"kubernetes.io/projected/756f0fb3-a2dc-4084-bd40-85fa0bf855bd-kube-api-access-cxd4g\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.694263 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"756f0fb3-a2dc-4084-bd40-85fa0bf855bd","Type":"ContainerDied","Data":"04aea29c7fc5b6f1bc58178b3ff3341af23045cb4ed9d21028d3ad3ec684056b"} Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.694343 4981 scope.go:117] "RemoveContainer" containerID="690dd72fb979939021095f044a3c5d5b6a2df24f6357b3c47a23e68bdf325490" Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.694535 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.728096 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.774758 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.775128 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 15:25:05 crc kubenswrapper[4981]: E0128 15:25:05.775519 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="756f0fb3-a2dc-4084-bd40-85fa0bf855bd" containerName="kube-state-metrics" Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.775537 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="756f0fb3-a2dc-4084-bd40-85fa0bf855bd" containerName="kube-state-metrics" Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.775787 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="756f0fb3-a2dc-4084-bd40-85fa0bf855bd" containerName="kube-state-metrics" Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.776425 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.776526 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.783579 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.783638 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.886227 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xr9q\" (UniqueName: \"kubernetes.io/projected/ead8d0cb-bf17-4ff6-b6ed-65c7205194cc-kube-api-access-9xr9q\") pod \"kube-state-metrics-0\" (UID: \"ead8d0cb-bf17-4ff6-b6ed-65c7205194cc\") " pod="openstack/kube-state-metrics-0" Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.886608 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead8d0cb-bf17-4ff6-b6ed-65c7205194cc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ead8d0cb-bf17-4ff6-b6ed-65c7205194cc\") " pod="openstack/kube-state-metrics-0" Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.886706 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ead8d0cb-bf17-4ff6-b6ed-65c7205194cc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ead8d0cb-bf17-4ff6-b6ed-65c7205194cc\") " pod="openstack/kube-state-metrics-0" Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.886943 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead8d0cb-bf17-4ff6-b6ed-65c7205194cc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ead8d0cb-bf17-4ff6-b6ed-65c7205194cc\") " pod="openstack/kube-state-metrics-0" Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.988622 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead8d0cb-bf17-4ff6-b6ed-65c7205194cc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ead8d0cb-bf17-4ff6-b6ed-65c7205194cc\") " pod="openstack/kube-state-metrics-0" Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.988731 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xr9q\" (UniqueName: \"kubernetes.io/projected/ead8d0cb-bf17-4ff6-b6ed-65c7205194cc-kube-api-access-9xr9q\") pod \"kube-state-metrics-0\" (UID: \"ead8d0cb-bf17-4ff6-b6ed-65c7205194cc\") " pod="openstack/kube-state-metrics-0" Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.988885 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead8d0cb-bf17-4ff6-b6ed-65c7205194cc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ead8d0cb-bf17-4ff6-b6ed-65c7205194cc\") " pod="openstack/kube-state-metrics-0" Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.988930 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ead8d0cb-bf17-4ff6-b6ed-65c7205194cc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: 
\"ead8d0cb-bf17-4ff6-b6ed-65c7205194cc\") " pod="openstack/kube-state-metrics-0" Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.994221 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/ead8d0cb-bf17-4ff6-b6ed-65c7205194cc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"ead8d0cb-bf17-4ff6-b6ed-65c7205194cc\") " pod="openstack/kube-state-metrics-0" Jan 28 15:25:05 crc kubenswrapper[4981]: I0128 15:25:05.995898 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ead8d0cb-bf17-4ff6-b6ed-65c7205194cc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"ead8d0cb-bf17-4ff6-b6ed-65c7205194cc\") " pod="openstack/kube-state-metrics-0" Jan 28 15:25:06 crc kubenswrapper[4981]: I0128 15:25:06.001063 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/ead8d0cb-bf17-4ff6-b6ed-65c7205194cc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"ead8d0cb-bf17-4ff6-b6ed-65c7205194cc\") " pod="openstack/kube-state-metrics-0" Jan 28 15:25:06 crc kubenswrapper[4981]: I0128 15:25:06.014971 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xr9q\" (UniqueName: \"kubernetes.io/projected/ead8d0cb-bf17-4ff6-b6ed-65c7205194cc-kube-api-access-9xr9q\") pod \"kube-state-metrics-0\" (UID: \"ead8d0cb-bf17-4ff6-b6ed-65c7205194cc\") " pod="openstack/kube-state-metrics-0" Jan 28 15:25:06 crc kubenswrapper[4981]: I0128 15:25:06.094533 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:25:06 crc kubenswrapper[4981]: I0128 15:25:06.094803 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f75ac87-89fc-4468-abd3-7347faabc1dd" containerName="ceilometer-central-agent" containerID="cri-o://a792d74426a84d1e8b782eb765fb9849d992317e6a398b23264f64a37feca00d" gracePeriod=30 Jan 28 15:25:06 crc kubenswrapper[4981]: I0128 15:25:06.094852 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f75ac87-89fc-4468-abd3-7347faabc1dd" containerName="sg-core" containerID="cri-o://c837b985a29893346264aa885325b28d426d273e03c56a877d6622d21ebb6394" gracePeriod=30 Jan 28 15:25:06 crc kubenswrapper[4981]: I0128 15:25:06.094930 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f75ac87-89fc-4468-abd3-7347faabc1dd" containerName="proxy-httpd" containerID="cri-o://3a0f9b90f6d8855e669c270fa1947f5460d71bada5972eb0e994890c1823d862" gracePeriod=30 Jan 28 15:25:06 crc kubenswrapper[4981]: I0128 15:25:06.094941 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3f75ac87-89fc-4468-abd3-7347faabc1dd" containerName="ceilometer-notification-agent" containerID="cri-o://45a1d31fe914071bf1e29dd9a450d7aa3a7ff8eae89c38ea213cefbeb7ea8e1b" gracePeriod=30 Jan 28 15:25:06 crc kubenswrapper[4981]: I0128 15:25:06.109523 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 15:25:06 crc kubenswrapper[4981]: I0128 15:25:06.557782 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 15:25:06 crc kubenswrapper[4981]: W0128 15:25:06.559603 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podead8d0cb_bf17_4ff6_b6ed_65c7205194cc.slice/crio-53ea107987ec2f9d78cfb57993e77d22387707a659059f585c1fd3afdff9a334 WatchSource:0}: Error finding container 53ea107987ec2f9d78cfb57993e77d22387707a659059f585c1fd3afdff9a334: Status 404 returned error can't find the container with id 53ea107987ec2f9d78cfb57993e77d22387707a659059f585c1fd3afdff9a334 Jan 28 15:25:06 crc kubenswrapper[4981]: I0128 15:25:06.708443 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ead8d0cb-bf17-4ff6-b6ed-65c7205194cc","Type":"ContainerStarted","Data":"53ea107987ec2f9d78cfb57993e77d22387707a659059f585c1fd3afdff9a334"} Jan 28 15:25:06 crc kubenswrapper[4981]: I0128 15:25:06.713119 4981 generic.go:334] "Generic (PLEG): container finished" podID="3f75ac87-89fc-4468-abd3-7347faabc1dd" containerID="3a0f9b90f6d8855e669c270fa1947f5460d71bada5972eb0e994890c1823d862" exitCode=0 Jan 28 15:25:06 crc kubenswrapper[4981]: I0128 15:25:06.713248 4981 generic.go:334] "Generic (PLEG): container finished" podID="3f75ac87-89fc-4468-abd3-7347faabc1dd" containerID="c837b985a29893346264aa885325b28d426d273e03c56a877d6622d21ebb6394" exitCode=2 Jan 28 15:25:06 crc kubenswrapper[4981]: I0128 15:25:06.713326 4981 generic.go:334] "Generic (PLEG): container finished" podID="3f75ac87-89fc-4468-abd3-7347faabc1dd" containerID="a792d74426a84d1e8b782eb765fb9849d992317e6a398b23264f64a37feca00d" exitCode=0 Jan 28 15:25:06 crc kubenswrapper[4981]: I0128 15:25:06.713281 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f75ac87-89fc-4468-abd3-7347faabc1dd","Type":"ContainerDied","Data":"3a0f9b90f6d8855e669c270fa1947f5460d71bada5972eb0e994890c1823d862"} Jan 28 15:25:06 crc kubenswrapper[4981]: I0128 15:25:06.713469 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f75ac87-89fc-4468-abd3-7347faabc1dd","Type":"ContainerDied","Data":"c837b985a29893346264aa885325b28d426d273e03c56a877d6622d21ebb6394"} Jan 28 15:25:06 crc kubenswrapper[4981]: I0128 15:25:06.713502 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f75ac87-89fc-4468-abd3-7347faabc1dd","Type":"ContainerDied","Data":"a792d74426a84d1e8b782eb765fb9849d992317e6a398b23264f64a37feca00d"} Jan 28 15:25:07 crc kubenswrapper[4981]: I0128 15:25:07.344549 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="756f0fb3-a2dc-4084-bd40-85fa0bf855bd" path="/var/lib/kubelet/pods/756f0fb3-a2dc-4084-bd40-85fa0bf855bd/volumes" Jan 28 15:25:07 crc kubenswrapper[4981]: I0128 15:25:07.728180 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ead8d0cb-bf17-4ff6-b6ed-65c7205194cc","Type":"ContainerStarted","Data":"1574c8ea9111e77856c20445661c13517720b4551734f5e01eef386decaf31d2"} Jan 28 15:25:07 crc kubenswrapper[4981]: I0128 15:25:07.729487 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 15:25:07 crc kubenswrapper[4981]: I0128 15:25:07.759490 4981 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.395724052 podStartE2EDuration="2.759462317s" podCreationTimestamp="2026-01-28 15:25:05 +0000 UTC" firstStartedPulling="2026-01-28 15:25:06.562024364 +0000 UTC m=+1318.014182605" lastFinishedPulling="2026-01-28 15:25:06.925762589 +0000 UTC m=+1318.377920870" observedRunningTime="2026-01-28 15:25:07.753911908 +0000 UTC m=+1319.206070159" watchObservedRunningTime="2026-01-28 15:25:07.759462317 +0000 UTC m=+1319.211620568" Jan 28 15:25:08 crc kubenswrapper[4981]: I0128 15:25:08.096968 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 15:25:08 crc kubenswrapper[4981]: I0128 15:25:08.128970 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 15:25:08 crc kubenswrapper[4981]: I0128 15:25:08.295613 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 15:25:08 crc kubenswrapper[4981]: I0128 15:25:08.295692 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 15:25:08 crc kubenswrapper[4981]: I0128 15:25:08.778276 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.369964 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.379426 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a2b0f8b7-65ba-459e-a209-7349db6a0ba2" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.379700 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a2b0f8b7-65ba-459e-a209-7349db6a0ba2" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.194:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.459615 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-config-data\") pod \"3f75ac87-89fc-4468-abd3-7347faabc1dd\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.459668 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-sg-core-conf-yaml\") pod \"3f75ac87-89fc-4468-abd3-7347faabc1dd\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.459722 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f75ac87-89fc-4468-abd3-7347faabc1dd-run-httpd\") pod \"3f75ac87-89fc-4468-abd3-7347faabc1dd\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.459846 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-scripts\") pod 
\"3f75ac87-89fc-4468-abd3-7347faabc1dd\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.459901 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f75ac87-89fc-4468-abd3-7347faabc1dd-log-httpd\") pod \"3f75ac87-89fc-4468-abd3-7347faabc1dd\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.459998 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xxm5\" (UniqueName: \"kubernetes.io/projected/3f75ac87-89fc-4468-abd3-7347faabc1dd-kube-api-access-8xxm5\") pod \"3f75ac87-89fc-4468-abd3-7347faabc1dd\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.460080 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-combined-ca-bundle\") pod \"3f75ac87-89fc-4468-abd3-7347faabc1dd\" (UID: \"3f75ac87-89fc-4468-abd3-7347faabc1dd\") " Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.460293 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f75ac87-89fc-4468-abd3-7347faabc1dd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3f75ac87-89fc-4468-abd3-7347faabc1dd" (UID: "3f75ac87-89fc-4468-abd3-7347faabc1dd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.460840 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f75ac87-89fc-4468-abd3-7347faabc1dd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3f75ac87-89fc-4468-abd3-7347faabc1dd" (UID: "3f75ac87-89fc-4468-abd3-7347faabc1dd"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.462856 4981 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f75ac87-89fc-4468-abd3-7347faabc1dd-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.463334 4981 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3f75ac87-89fc-4468-abd3-7347faabc1dd-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.465935 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f75ac87-89fc-4468-abd3-7347faabc1dd-kube-api-access-8xxm5" (OuterVolumeSpecName: "kube-api-access-8xxm5") pod "3f75ac87-89fc-4468-abd3-7347faabc1dd" (UID: "3f75ac87-89fc-4468-abd3-7347faabc1dd"). InnerVolumeSpecName "kube-api-access-8xxm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.466206 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-scripts" (OuterVolumeSpecName: "scripts") pod "3f75ac87-89fc-4468-abd3-7347faabc1dd" (UID: "3f75ac87-89fc-4468-abd3-7347faabc1dd"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.505562 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3f75ac87-89fc-4468-abd3-7347faabc1dd" (UID: "3f75ac87-89fc-4468-abd3-7347faabc1dd"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.546102 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f75ac87-89fc-4468-abd3-7347faabc1dd" (UID: "3f75ac87-89fc-4468-abd3-7347faabc1dd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.565377 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.565420 4981 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.565434 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.565446 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xxm5\" (UniqueName: \"kubernetes.io/projected/3f75ac87-89fc-4468-abd3-7347faabc1dd-kube-api-access-8xxm5\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.581160 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-config-data" (OuterVolumeSpecName: "config-data") pod "3f75ac87-89fc-4468-abd3-7347faabc1dd" (UID: "3f75ac87-89fc-4468-abd3-7347faabc1dd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.666679 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f75ac87-89fc-4468-abd3-7347faabc1dd-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.748770 4981 generic.go:334] "Generic (PLEG): container finished" podID="3f75ac87-89fc-4468-abd3-7347faabc1dd" containerID="45a1d31fe914071bf1e29dd9a450d7aa3a7ff8eae89c38ea213cefbeb7ea8e1b" exitCode=0 Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.748831 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f75ac87-89fc-4468-abd3-7347faabc1dd","Type":"ContainerDied","Data":"45a1d31fe914071bf1e29dd9a450d7aa3a7ff8eae89c38ea213cefbeb7ea8e1b"} Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.748884 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.748915 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3f75ac87-89fc-4468-abd3-7347faabc1dd","Type":"ContainerDied","Data":"a4bd9f859ee9fb3759f15b0498ad78ec210436847c10286c62f03762873446e5"} Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.748939 4981 scope.go:117] "RemoveContainer" containerID="3a0f9b90f6d8855e669c270fa1947f5460d71bada5972eb0e994890c1823d862" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.778804 4981 scope.go:117] "RemoveContainer" containerID="c837b985a29893346264aa885325b28d426d273e03c56a877d6622d21ebb6394" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.804765 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.810086 4981 scope.go:117] "RemoveContainer" containerID="45a1d31fe914071bf1e29dd9a450d7aa3a7ff8eae89c38ea213cefbeb7ea8e1b" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.829857 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.857582 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:25:09 crc kubenswrapper[4981]: E0128 15:25:09.858310 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f75ac87-89fc-4468-abd3-7347faabc1dd" containerName="proxy-httpd" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.858331 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f75ac87-89fc-4468-abd3-7347faabc1dd" containerName="proxy-httpd" Jan 28 15:25:09 crc kubenswrapper[4981]: E0128 15:25:09.858357 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f75ac87-89fc-4468-abd3-7347faabc1dd" containerName="ceilometer-notification-agent" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.858365 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f75ac87-89fc-4468-abd3-7347faabc1dd" containerName="ceilometer-notification-agent" Jan 28 15:25:09 crc kubenswrapper[4981]: E0128 15:25:09.858383 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f75ac87-89fc-4468-abd3-7347faabc1dd" containerName="ceilometer-central-agent" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.858389 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f75ac87-89fc-4468-abd3-7347faabc1dd" containerName="ceilometer-central-agent" Jan 28 15:25:09 crc kubenswrapper[4981]: E0128 15:25:09.858400 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f75ac87-89fc-4468-abd3-7347faabc1dd" containerName="sg-core" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.858405 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f75ac87-89fc-4468-abd3-7347faabc1dd" containerName="sg-core" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.858632 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f75ac87-89fc-4468-abd3-7347faabc1dd" containerName="ceilometer-central-agent" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.858647 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f75ac87-89fc-4468-abd3-7347faabc1dd" containerName="ceilometer-notification-agent" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.858665 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f75ac87-89fc-4468-abd3-7347faabc1dd" 
containerName="sg-core" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.858679 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f75ac87-89fc-4468-abd3-7347faabc1dd" containerName="proxy-httpd" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.859836 4981 scope.go:117] "RemoveContainer" containerID="a792d74426a84d1e8b782eb765fb9849d992317e6a398b23264f64a37feca00d" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.860596 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.862405 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.865421 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.865538 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.865794 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.907509 4981 scope.go:117] "RemoveContainer" containerID="3a0f9b90f6d8855e669c270fa1947f5460d71bada5972eb0e994890c1823d862" Jan 28 15:25:09 crc kubenswrapper[4981]: E0128 15:25:09.908061 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a0f9b90f6d8855e669c270fa1947f5460d71bada5972eb0e994890c1823d862\": container with ID starting with 3a0f9b90f6d8855e669c270fa1947f5460d71bada5972eb0e994890c1823d862 not found: ID does not exist" containerID="3a0f9b90f6d8855e669c270fa1947f5460d71bada5972eb0e994890c1823d862" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.908116 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a0f9b90f6d8855e669c270fa1947f5460d71bada5972eb0e994890c1823d862"} err="failed to get container status \"3a0f9b90f6d8855e669c270fa1947f5460d71bada5972eb0e994890c1823d862\": rpc error: code = NotFound desc = could not find container \"3a0f9b90f6d8855e669c270fa1947f5460d71bada5972eb0e994890c1823d862\": container with ID starting with 3a0f9b90f6d8855e669c270fa1947f5460d71bada5972eb0e994890c1823d862 not found: ID does not exist" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.908145 4981 scope.go:117] "RemoveContainer" containerID="c837b985a29893346264aa885325b28d426d273e03c56a877d6622d21ebb6394" Jan 28 15:25:09 crc kubenswrapper[4981]: E0128 15:25:09.908685 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c837b985a29893346264aa885325b28d426d273e03c56a877d6622d21ebb6394\": container with ID starting with c837b985a29893346264aa885325b28d426d273e03c56a877d6622d21ebb6394 not found: ID does not exist" containerID="c837b985a29893346264aa885325b28d426d273e03c56a877d6622d21ebb6394" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.908712 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c837b985a29893346264aa885325b28d426d273e03c56a877d6622d21ebb6394"} err="failed to get container status \"c837b985a29893346264aa885325b28d426d273e03c56a877d6622d21ebb6394\": rpc error: code = NotFound desc = could not find container 
\"c837b985a29893346264aa885325b28d426d273e03c56a877d6622d21ebb6394\": container with ID starting with c837b985a29893346264aa885325b28d426d273e03c56a877d6622d21ebb6394 not found: ID does not exist" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.908738 4981 scope.go:117] "RemoveContainer" containerID="45a1d31fe914071bf1e29dd9a450d7aa3a7ff8eae89c38ea213cefbeb7ea8e1b" Jan 28 15:25:09 crc kubenswrapper[4981]: E0128 15:25:09.908973 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45a1d31fe914071bf1e29dd9a450d7aa3a7ff8eae89c38ea213cefbeb7ea8e1b\": container with ID starting with 45a1d31fe914071bf1e29dd9a450d7aa3a7ff8eae89c38ea213cefbeb7ea8e1b not found: ID does not exist" containerID="45a1d31fe914071bf1e29dd9a450d7aa3a7ff8eae89c38ea213cefbeb7ea8e1b" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.908993 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45a1d31fe914071bf1e29dd9a450d7aa3a7ff8eae89c38ea213cefbeb7ea8e1b"} err="failed to get container status \"45a1d31fe914071bf1e29dd9a450d7aa3a7ff8eae89c38ea213cefbeb7ea8e1b\": rpc error: code = NotFound desc = could not find container \"45a1d31fe914071bf1e29dd9a450d7aa3a7ff8eae89c38ea213cefbeb7ea8e1b\": container with ID starting with 45a1d31fe914071bf1e29dd9a450d7aa3a7ff8eae89c38ea213cefbeb7ea8e1b not found: ID does not exist" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.909004 4981 scope.go:117] "RemoveContainer" containerID="a792d74426a84d1e8b782eb765fb9849d992317e6a398b23264f64a37feca00d" Jan 28 15:25:09 crc kubenswrapper[4981]: E0128 15:25:09.909464 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a792d74426a84d1e8b782eb765fb9849d992317e6a398b23264f64a37feca00d\": container with ID starting with a792d74426a84d1e8b782eb765fb9849d992317e6a398b23264f64a37feca00d not found: ID does not exist" containerID="a792d74426a84d1e8b782eb765fb9849d992317e6a398b23264f64a37feca00d" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.909484 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a792d74426a84d1e8b782eb765fb9849d992317e6a398b23264f64a37feca00d"} err="failed to get container status \"a792d74426a84d1e8b782eb765fb9849d992317e6a398b23264f64a37feca00d\": rpc error: code = NotFound desc = could not find container \"a792d74426a84d1e8b782eb765fb9849d992317e6a398b23264f64a37feca00d\": container with ID starting with a792d74426a84d1e8b782eb765fb9849d992317e6a398b23264f64a37feca00d not found: ID does not exist" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.972811 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.972860 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.972890 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fd49ada-7fea-40ee-a0f5-06a153be11e3-run-httpd\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.972913 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxwqz\" (UniqueName: \"kubernetes.io/projected/8fd49ada-7fea-40ee-a0f5-06a153be11e3-kube-api-access-dxwqz\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.973091 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-config-data\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.973292 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-scripts\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.973518 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fd49ada-7fea-40ee-a0f5-06a153be11e3-log-httpd\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:09 crc kubenswrapper[4981]: I0128 15:25:09.973603 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:10 crc kubenswrapper[4981]: I0128 15:25:10.076030 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-scripts\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:10 crc kubenswrapper[4981]: I0128 15:25:10.076221 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fd49ada-7fea-40ee-a0f5-06a153be11e3-log-httpd\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:10 crc kubenswrapper[4981]: I0128 15:25:10.076319 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:10 crc kubenswrapper[4981]: I0128 15:25:10.076440 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 
15:25:10 crc kubenswrapper[4981]: I0128 15:25:10.076498 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:10 crc kubenswrapper[4981]: I0128 15:25:10.076656 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fd49ada-7fea-40ee-a0f5-06a153be11e3-run-httpd\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:10 crc kubenswrapper[4981]: I0128 15:25:10.076795 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fd49ada-7fea-40ee-a0f5-06a153be11e3-log-httpd\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:10 crc kubenswrapper[4981]: I0128 15:25:10.077469 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fd49ada-7fea-40ee-a0f5-06a153be11e3-run-httpd\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:10 crc kubenswrapper[4981]: I0128 15:25:10.077711 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxwqz\" (UniqueName: \"kubernetes.io/projected/8fd49ada-7fea-40ee-a0f5-06a153be11e3-kube-api-access-dxwqz\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:10 crc kubenswrapper[4981]: I0128 15:25:10.077796 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-config-data\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:10 crc kubenswrapper[4981]: I0128 15:25:10.080976 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-scripts\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:10 crc kubenswrapper[4981]: I0128 15:25:10.081636 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:10 crc kubenswrapper[4981]: I0128 15:25:10.084487 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-config-data\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:10 crc kubenswrapper[4981]: I0128 15:25:10.084582 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:10 crc kubenswrapper[4981]: I0128 15:25:10.088636 4981 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:10 crc kubenswrapper[4981]: I0128 15:25:10.103268 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxwqz\" (UniqueName: \"kubernetes.io/projected/8fd49ada-7fea-40ee-a0f5-06a153be11e3-kube-api-access-dxwqz\") pod \"ceilometer-0\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") " pod="openstack/ceilometer-0" Jan 28 15:25:10 crc kubenswrapper[4981]: I0128 15:25:10.191387 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:25:10 crc kubenswrapper[4981]: I0128 15:25:10.691549 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:25:10 crc kubenswrapper[4981]: W0128 15:25:10.702524 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fd49ada_7fea_40ee_a0f5_06a153be11e3.slice/crio-6dc206c45b353d9242494688acbc6ad5272f4d41f1ca8cd2e27b3c5760393302 WatchSource:0}: Error finding container 6dc206c45b353d9242494688acbc6ad5272f4d41f1ca8cd2e27b3c5760393302: Status 404 returned error can't find the container with id 6dc206c45b353d9242494688acbc6ad5272f4d41f1ca8cd2e27b3c5760393302 Jan 28 15:25:10 crc kubenswrapper[4981]: I0128 15:25:10.759431 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fd49ada-7fea-40ee-a0f5-06a153be11e3","Type":"ContainerStarted","Data":"6dc206c45b353d9242494688acbc6ad5272f4d41f1ca8cd2e27b3c5760393302"} Jan 28 15:25:11 crc kubenswrapper[4981]: I0128 15:25:11.333542 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f75ac87-89fc-4468-abd3-7347faabc1dd" path="/var/lib/kubelet/pods/3f75ac87-89fc-4468-abd3-7347faabc1dd/volumes" Jan 28 15:25:11 crc kubenswrapper[4981]: I0128 15:25:11.775064 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fd49ada-7fea-40ee-a0f5-06a153be11e3","Type":"ContainerStarted","Data":"d55fec097929620b59df56d189bee6e0909e11b2fb663bba4e1ff86b80350472"} Jan 28 15:25:12 crc kubenswrapper[4981]: I0128 15:25:12.788309 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fd49ada-7fea-40ee-a0f5-06a153be11e3","Type":"ContainerStarted","Data":"d024d01bfeaa959a7d8f49bc4aa655b777debe15982487fec5973c8133c3b450"} Jan 28 15:25:13 crc kubenswrapper[4981]: I0128 15:25:13.811459 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fd49ada-7fea-40ee-a0f5-06a153be11e3","Type":"ContainerStarted","Data":"572bc345237996e7750d698beadba32cf4cd75e2e564414b42ce762c74f645db"} Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.841489 4981 generic.go:334] "Generic (PLEG): container finished" podID="dc7f2b64-003f-47e8-a3b0-c29cb1c47f55" containerID="f6f6c593ee94516385a1845445ae913d0e788745620c2357b1ef2fd9813f422e" exitCode=137 Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.842062 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dc7f2b64-003f-47e8-a3b0-c29cb1c47f55","Type":"ContainerDied","Data":"f6f6c593ee94516385a1845445ae913d0e788745620c2357b1ef2fd9813f422e"} Jan 28 15:25:15 crc 
kubenswrapper[4981]: I0128 15:25:15.842089 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dc7f2b64-003f-47e8-a3b0-c29cb1c47f55","Type":"ContainerDied","Data":"c65f59ddaa528a162cd24a2beb0a0fb364275409b8ad0d7da4108834e9270395"} Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.842100 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c65f59ddaa528a162cd24a2beb0a0fb364275409b8ad0d7da4108834e9270395" Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.845018 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fd49ada-7fea-40ee-a0f5-06a153be11e3","Type":"ContainerStarted","Data":"2c46eee2a2b6467c6fd4e4b8c2ee763703c3c60fe3b462070a599e381150164a"} Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.845576 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.848326 4981 generic.go:334] "Generic (PLEG): container finished" podID="f9e5013b-9327-47dc-ac6f-1e749b59ca64" containerID="a52085f7bca4e361d587761cb7c7e82631dac1c8cf69167d22887f4f91c3b846" exitCode=137 Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.848241 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f9e5013b-9327-47dc-ac6f-1e749b59ca64","Type":"ContainerDied","Data":"a52085f7bca4e361d587761cb7c7e82631dac1c8cf69167d22887f4f91c3b846"} Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.848905 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f9e5013b-9327-47dc-ac6f-1e749b59ca64","Type":"ContainerDied","Data":"42b7783a957cf30c9fc619d2fd27b790a87f1bc2d18ad030db7f1edb23cc47dc"} Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.848918 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42b7783a957cf30c9fc619d2fd27b790a87f1bc2d18ad030db7f1edb23cc47dc" Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.850653 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.855782 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.865494 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5530001650000003 podStartE2EDuration="6.865479445s" podCreationTimestamp="2026-01-28 15:25:09 +0000 UTC" firstStartedPulling="2026-01-28 15:25:10.704984004 +0000 UTC m=+1322.157142245" lastFinishedPulling="2026-01-28 15:25:15.017463254 +0000 UTC m=+1326.469621525" observedRunningTime="2026-01-28 15:25:15.864257092 +0000 UTC m=+1327.316415333" watchObservedRunningTime="2026-01-28 15:25:15.865479445 +0000 UTC m=+1327.317637686" Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.925970 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e5013b-9327-47dc-ac6f-1e749b59ca64-combined-ca-bundle\") pod \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\" (UID: \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\") " Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.926091 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9nbr\" (UniqueName: \"kubernetes.io/projected/dc7f2b64-003f-47e8-a3b0-c29cb1c47f55-kube-api-access-x9nbr\") pod \"dc7f2b64-003f-47e8-a3b0-c29cb1c47f55\" (UID: \"dc7f2b64-003f-47e8-a3b0-c29cb1c47f55\") " Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.926141 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7f2b64-003f-47e8-a3b0-c29cb1c47f55-config-data\") pod \"dc7f2b64-003f-47e8-a3b0-c29cb1c47f55\" (UID: \"dc7f2b64-003f-47e8-a3b0-c29cb1c47f55\") " Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.926242 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9e5013b-9327-47dc-ac6f-1e749b59ca64-logs\") pod \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\" (UID: \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\") " Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.926289 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e5013b-9327-47dc-ac6f-1e749b59ca64-config-data\") pod \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\" (UID: \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\") " Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.926339 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7f2b64-003f-47e8-a3b0-c29cb1c47f55-combined-ca-bundle\") pod \"dc7f2b64-003f-47e8-a3b0-c29cb1c47f55\" (UID: \"dc7f2b64-003f-47e8-a3b0-c29cb1c47f55\") " Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.926415 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97gsw\" (UniqueName: \"kubernetes.io/projected/f9e5013b-9327-47dc-ac6f-1e749b59ca64-kube-api-access-97gsw\") pod \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\" (UID: \"f9e5013b-9327-47dc-ac6f-1e749b59ca64\") " Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.928316 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9e5013b-9327-47dc-ac6f-1e749b59ca64-logs" (OuterVolumeSpecName: "logs") pod "f9e5013b-9327-47dc-ac6f-1e749b59ca64" (UID: "f9e5013b-9327-47dc-ac6f-1e749b59ca64"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.931780 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9e5013b-9327-47dc-ac6f-1e749b59ca64-kube-api-access-97gsw" (OuterVolumeSpecName: "kube-api-access-97gsw") pod "f9e5013b-9327-47dc-ac6f-1e749b59ca64" (UID: "f9e5013b-9327-47dc-ac6f-1e749b59ca64"). InnerVolumeSpecName "kube-api-access-97gsw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.933427 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc7f2b64-003f-47e8-a3b0-c29cb1c47f55-kube-api-access-x9nbr" (OuterVolumeSpecName: "kube-api-access-x9nbr") pod "dc7f2b64-003f-47e8-a3b0-c29cb1c47f55" (UID: "dc7f2b64-003f-47e8-a3b0-c29cb1c47f55"). InnerVolumeSpecName "kube-api-access-x9nbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.958269 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc7f2b64-003f-47e8-a3b0-c29cb1c47f55-config-data" (OuterVolumeSpecName: "config-data") pod "dc7f2b64-003f-47e8-a3b0-c29cb1c47f55" (UID: "dc7f2b64-003f-47e8-a3b0-c29cb1c47f55"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.958593 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9e5013b-9327-47dc-ac6f-1e749b59ca64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9e5013b-9327-47dc-ac6f-1e749b59ca64" (UID: "f9e5013b-9327-47dc-ac6f-1e749b59ca64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.960663 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9e5013b-9327-47dc-ac6f-1e749b59ca64-config-data" (OuterVolumeSpecName: "config-data") pod "f9e5013b-9327-47dc-ac6f-1e749b59ca64" (UID: "f9e5013b-9327-47dc-ac6f-1e749b59ca64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:15 crc kubenswrapper[4981]: I0128 15:25:15.964664 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc7f2b64-003f-47e8-a3b0-c29cb1c47f55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc7f2b64-003f-47e8-a3b0-c29cb1c47f55" (UID: "dc7f2b64-003f-47e8-a3b0-c29cb1c47f55"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.028408 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f9e5013b-9327-47dc-ac6f-1e749b59ca64-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.028438 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9e5013b-9327-47dc-ac6f-1e749b59ca64-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.028451 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7f2b64-003f-47e8-a3b0-c29cb1c47f55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.028465 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97gsw\" (UniqueName: \"kubernetes.io/projected/f9e5013b-9327-47dc-ac6f-1e749b59ca64-kube-api-access-97gsw\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.028479 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9e5013b-9327-47dc-ac6f-1e749b59ca64-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.028491 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9nbr\" (UniqueName: \"kubernetes.io/projected/dc7f2b64-003f-47e8-a3b0-c29cb1c47f55-kube-api-access-x9nbr\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.028502 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7f2b64-003f-47e8-a3b0-c29cb1c47f55-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.122406 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.861055 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.861081 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.972176 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.983499 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.991706 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 15:25:16 crc kubenswrapper[4981]: E0128 15:25:16.992182 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e5013b-9327-47dc-ac6f-1e749b59ca64" containerName="nova-metadata-log" Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.992289 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e5013b-9327-47dc-ac6f-1e749b59ca64" containerName="nova-metadata-log" Jan 28 15:25:16 crc kubenswrapper[4981]: E0128 15:25:16.992309 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc7f2b64-003f-47e8-a3b0-c29cb1c47f55" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.992317 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc7f2b64-003f-47e8-a3b0-c29cb1c47f55" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 15:25:16 crc kubenswrapper[4981]: E0128 15:25:16.992341 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9e5013b-9327-47dc-ac6f-1e749b59ca64" containerName="nova-metadata-metadata" Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.992347 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9e5013b-9327-47dc-ac6f-1e749b59ca64" containerName="nova-metadata-metadata" Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.992565 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc7f2b64-003f-47e8-a3b0-c29cb1c47f55" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.992586 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9e5013b-9327-47dc-ac6f-1e749b59ca64" containerName="nova-metadata-metadata" Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.992608 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9e5013b-9327-47dc-ac6f-1e749b59ca64" containerName="nova-metadata-log" Jan 28 15:25:16 crc kubenswrapper[4981]: I0128 15:25:16.993342 4981 util.go:30] "No sandbox for pod can be found. 
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.001132 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.001502 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.001722 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.001881 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.010300 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.014572 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.060581 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.062430 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.064724 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.065636 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.099527 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.149688 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-config-data\") pod \"nova-metadata-0\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " pod="openstack/nova-metadata-0"
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.149860 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7tx5\" (UniqueName: \"kubernetes.io/projected/a7a16e56-277e-47a2-91e2-21a8ec2976db-kube-api-access-r7tx5\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7a16e56-277e-47a2-91e2-21a8ec2976db\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.149920 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " pod="openstack/nova-metadata-0"
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.149969 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " pod="openstack/nova-metadata-0"
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.150033 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a16e56-277e-47a2-91e2-21a8ec2976db-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7a16e56-277e-47a2-91e2-21a8ec2976db\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.150075 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcpsv\" (UniqueName: \"kubernetes.io/projected/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-kube-api-access-pcpsv\") pod \"nova-metadata-0\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " pod="openstack/nova-metadata-0"
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.150210 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-logs\") pod \"nova-metadata-0\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " pod="openstack/nova-metadata-0"
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.150296 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a16e56-277e-47a2-91e2-21a8ec2976db-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7a16e56-277e-47a2-91e2-21a8ec2976db\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.150339 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a16e56-277e-47a2-91e2-21a8ec2976db-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7a16e56-277e-47a2-91e2-21a8ec2976db\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.150408 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a16e56-277e-47a2-91e2-21a8ec2976db-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7a16e56-277e-47a2-91e2-21a8ec2976db\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.252960 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-logs\") pod \"nova-metadata-0\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " pod="openstack/nova-metadata-0"
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.253073 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a16e56-277e-47a2-91e2-21a8ec2976db-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7a16e56-277e-47a2-91e2-21a8ec2976db\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.253140 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a16e56-277e-47a2-91e2-21a8ec2976db-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7a16e56-277e-47a2-91e2-21a8ec2976db\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.253222 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a16e56-277e-47a2-91e2-21a8ec2976db-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7a16e56-277e-47a2-91e2-21a8ec2976db\") " pod="openstack/nova-cell1-novncproxy-0"
(UniqueName: \"kubernetes.io/secret/a7a16e56-277e-47a2-91e2-21a8ec2976db-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7a16e56-277e-47a2-91e2-21a8ec2976db\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.253350 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-config-data\") pod \"nova-metadata-0\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " pod="openstack/nova-metadata-0" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.253409 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7tx5\" (UniqueName: \"kubernetes.io/projected/a7a16e56-277e-47a2-91e2-21a8ec2976db-kube-api-access-r7tx5\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7a16e56-277e-47a2-91e2-21a8ec2976db\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.253449 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " pod="openstack/nova-metadata-0" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.253483 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " pod="openstack/nova-metadata-0" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.253539 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a16e56-277e-47a2-91e2-21a8ec2976db-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7a16e56-277e-47a2-91e2-21a8ec2976db\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.253598 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcpsv\" (UniqueName: \"kubernetes.io/projected/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-kube-api-access-pcpsv\") pod \"nova-metadata-0\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " pod="openstack/nova-metadata-0" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.254494 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-logs\") pod \"nova-metadata-0\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " pod="openstack/nova-metadata-0" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.259159 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7a16e56-277e-47a2-91e2-21a8ec2976db-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7a16e56-277e-47a2-91e2-21a8ec2976db\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.259928 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " 
pod="openstack/nova-metadata-0" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.259942 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " pod="openstack/nova-metadata-0" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.260884 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-config-data\") pod \"nova-metadata-0\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " pod="openstack/nova-metadata-0" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.262354 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a16e56-277e-47a2-91e2-21a8ec2976db-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7a16e56-277e-47a2-91e2-21a8ec2976db\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.263635 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7a16e56-277e-47a2-91e2-21a8ec2976db-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7a16e56-277e-47a2-91e2-21a8ec2976db\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.273631 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7tx5\" (UniqueName: \"kubernetes.io/projected/a7a16e56-277e-47a2-91e2-21a8ec2976db-kube-api-access-r7tx5\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7a16e56-277e-47a2-91e2-21a8ec2976db\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.278030 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7a16e56-277e-47a2-91e2-21a8ec2976db-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"a7a16e56-277e-47a2-91e2-21a8ec2976db\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.284552 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcpsv\" (UniqueName: \"kubernetes.io/projected/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-kube-api-access-pcpsv\") pod \"nova-metadata-0\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " pod="openstack/nova-metadata-0" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.332500 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc7f2b64-003f-47e8-a3b0-c29cb1c47f55" path="/var/lib/kubelet/pods/dc7f2b64-003f-47e8-a3b0-c29cb1c47f55/volumes" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.333265 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9e5013b-9327-47dc-ac6f-1e749b59ca64" path="/var/lib/kubelet/pods/f9e5013b-9327-47dc-ac6f-1e749b59ca64/volumes" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.351528 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.381457 4981 util.go:30] "No sandbox for pod can be found. 
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.858095 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.875825 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a7a16e56-277e-47a2-91e2-21a8ec2976db","Type":"ContainerStarted","Data":"4ebd53415ba2fdedf95fda2261537fb8968a4970a34a90eb0248c62d2b5c4032"}
Jan 28 15:25:17 crc kubenswrapper[4981]: I0128 15:25:17.968897 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 15:25:17 crc kubenswrapper[4981]: W0128 15:25:17.994497 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc3fea1f_380b_4b3e_9d89_76a8ed8faaec.slice/crio-ac44632da930201e91d1c3b3243d29e28fb925b29681b9e8b5f1cc20747d53a0 WatchSource:0}: Error finding container ac44632da930201e91d1c3b3243d29e28fb925b29681b9e8b5f1cc20747d53a0: Status 404 returned error can't find the container with id ac44632da930201e91d1c3b3243d29e28fb925b29681b9e8b5f1cc20747d53a0
Jan 28 15:25:18 crc kubenswrapper[4981]: I0128 15:25:18.303156 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 28 15:25:18 crc kubenswrapper[4981]: I0128 15:25:18.303792 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 28 15:25:18 crc kubenswrapper[4981]: I0128 15:25:18.303992 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 28 15:25:18 crc kubenswrapper[4981]: I0128 15:25:18.311622 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 28 15:25:18 crc kubenswrapper[4981]: I0128 15:25:18.885224 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec","Type":"ContainerStarted","Data":"5649cb2f8dd03dae3217f1d3d27313ee2f50cb9a25e3c1daf85a7fabb692c250"}
Jan 28 15:25:18 crc kubenswrapper[4981]: I0128 15:25:18.885268 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec","Type":"ContainerStarted","Data":"f32a0b7221383c4d4771a0c3634cf9d71481007a46cbbecea546acfc4cb82b31"}
Jan 28 15:25:18 crc kubenswrapper[4981]: I0128 15:25:18.885283 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec","Type":"ContainerStarted","Data":"ac44632da930201e91d1c3b3243d29e28fb925b29681b9e8b5f1cc20747d53a0"}
Jan 28 15:25:18 crc kubenswrapper[4981]: I0128 15:25:18.890287 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"a7a16e56-277e-47a2-91e2-21a8ec2976db","Type":"ContainerStarted","Data":"284ffa977391920cade2c4790d96589514584c790a88a735ab0d47aaf86b9a67"}
Jan 28 15:25:18 crc kubenswrapper[4981]: I0128 15:25:18.890329 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 28 15:25:18 crc kubenswrapper[4981]: I0128 15:25:18.894633 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 28 15:25:18 crc kubenswrapper[4981]: I0128 15:25:18.926051 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.926028137 podStartE2EDuration="2.926028137s" podCreationTimestamp="2026-01-28 15:25:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:25:18.911325211 +0000 UTC m=+1330.363483492" watchObservedRunningTime="2026-01-28 15:25:18.926028137 +0000 UTC m=+1330.378186388"
Jan 28 15:25:18 crc kubenswrapper[4981]: I0128 15:25:18.937168 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.937145976 podStartE2EDuration="2.937145976s" podCreationTimestamp="2026-01-28 15:25:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:25:18.933254171 +0000 UTC m=+1330.385412442" watchObservedRunningTime="2026-01-28 15:25:18.937145976 +0000 UTC m=+1330.389304227"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.131499 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-b4nl5"]
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.133037 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.148454 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-b4nl5"]
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.194510 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-b4nl5\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.195100 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-b4nl5\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.195266 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-b4nl5\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.195475 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-b4nl5\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.195618 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxlqm\" (UniqueName: \"kubernetes.io/projected/46b93ba2-9bbb-4754-8aee-7c588bc645de-kube-api-access-pxlqm\") pod \"dnsmasq-dns-59cf4bdb65-b4nl5\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.195790 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-config\") pod \"dnsmasq-dns-59cf4bdb65-b4nl5\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.297362 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-b4nl5\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.297404 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-b4nl5\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.297431 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-b4nl5\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.297476 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-b4nl5\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.297499 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxlqm\" (UniqueName: \"kubernetes.io/projected/46b93ba2-9bbb-4754-8aee-7c588bc645de-kube-api-access-pxlqm\") pod \"dnsmasq-dns-59cf4bdb65-b4nl5\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.297532 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-config\") pod \"dnsmasq-dns-59cf4bdb65-b4nl5\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.314136 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-config\") pod \"dnsmasq-dns-59cf4bdb65-b4nl5\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.314363 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-ovsdbserver-nb\") pod \"dnsmasq-dns-59cf4bdb65-b4nl5\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.314816 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-dns-swift-storage-0\") pod \"dnsmasq-dns-59cf4bdb65-b4nl5\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.327038 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-ovsdbserver-sb\") pod \"dnsmasq-dns-59cf4bdb65-b4nl5\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.327046 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-dns-svc\") pod \"dnsmasq-dns-59cf4bdb65-b4nl5\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.328327 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxlqm\" (UniqueName: \"kubernetes.io/projected/46b93ba2-9bbb-4754-8aee-7c588bc645de-kube-api-access-pxlqm\") pod \"dnsmasq-dns-59cf4bdb65-b4nl5\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.458070 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:19 crc kubenswrapper[4981]: I0128 15:25:19.967498 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-b4nl5"]
Jan 28 15:25:20 crc kubenswrapper[4981]: I0128 15:25:20.910753 4981 generic.go:334] "Generic (PLEG): container finished" podID="46b93ba2-9bbb-4754-8aee-7c588bc645de" containerID="12b2ccac9229ad2a3652827c95682a8360888aa87c768c67f5e883c657ae937f" exitCode=0
Jan 28 15:25:20 crc kubenswrapper[4981]: I0128 15:25:20.911029 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5" event={"ID":"46b93ba2-9bbb-4754-8aee-7c588bc645de","Type":"ContainerDied","Data":"12b2ccac9229ad2a3652827c95682a8360888aa87c768c67f5e883c657ae937f"}
Jan 28 15:25:20 crc kubenswrapper[4981]: I0128 15:25:20.911371 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5" event={"ID":"46b93ba2-9bbb-4754-8aee-7c588bc645de","Type":"ContainerStarted","Data":"66bc54a211c5dbd15f5dce7ba9cc7b75d4b4d2eda68e13f535c26106862a55dd"}
Jan 28 15:25:21 crc kubenswrapper[4981]: I0128 15:25:21.714129 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 28 15:25:21 crc kubenswrapper[4981]: I0128 15:25:21.924390 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5" event={"ID":"46b93ba2-9bbb-4754-8aee-7c588bc645de","Type":"ContainerStarted","Data":"11f0ace9333df29fd82dfbe7a5b5d48e8b15826e3a2403f07bcdfb865d5bc738"}
Jan 28 15:25:21 crc kubenswrapper[4981]: I0128 15:25:21.924770 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5"
Jan 28 15:25:21 crc kubenswrapper[4981]: I0128 15:25:21.925121 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a2b0f8b7-65ba-459e-a209-7349db6a0ba2" containerName="nova-api-api" containerID="cri-o://cd5207b105c82a367d3c808f33c579d27a6b784d8c09d362993f3c1423f822da" gracePeriod=30
containerID="cri-o://cd5207b105c82a367d3c808f33c579d27a6b784d8c09d362993f3c1423f822da" gracePeriod=30 Jan 28 15:25:21 crc kubenswrapper[4981]: I0128 15:25:21.925155 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a2b0f8b7-65ba-459e-a209-7349db6a0ba2" containerName="nova-api-log" containerID="cri-o://9253d3964d74b93e45879373a6a1e9160b9ce2c0509ccd6c5dd1db2a6cf4e434" gracePeriod=30 Jan 28 15:25:21 crc kubenswrapper[4981]: I0128 15:25:21.973893 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5" podStartSLOduration=2.973872237 podStartE2EDuration="2.973872237s" podCreationTimestamp="2026-01-28 15:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:25:21.966098958 +0000 UTC m=+1333.418257239" watchObservedRunningTime="2026-01-28 15:25:21.973872237 +0000 UTC m=+1333.426030478" Jan 28 15:25:22 crc kubenswrapper[4981]: I0128 15:25:22.160479 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:25:22 crc kubenswrapper[4981]: I0128 15:25:22.160768 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerName="ceilometer-central-agent" containerID="cri-o://d55fec097929620b59df56d189bee6e0909e11b2fb663bba4e1ff86b80350472" gracePeriod=30 Jan 28 15:25:22 crc kubenswrapper[4981]: I0128 15:25:22.160852 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerName="proxy-httpd" containerID="cri-o://2c46eee2a2b6467c6fd4e4b8c2ee763703c3c60fe3b462070a599e381150164a" gracePeriod=30 Jan 28 15:25:22 crc kubenswrapper[4981]: I0128 15:25:22.160878 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerName="ceilometer-notification-agent" containerID="cri-o://d024d01bfeaa959a7d8f49bc4aa655b777debe15982487fec5973c8133c3b450" gracePeriod=30 Jan 28 15:25:22 crc kubenswrapper[4981]: I0128 15:25:22.160875 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerName="sg-core" containerID="cri-o://572bc345237996e7750d698beadba32cf4cd75e2e564414b42ce762c74f645db" gracePeriod=30 Jan 28 15:25:22 crc kubenswrapper[4981]: I0128 15:25:22.353088 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:25:22 crc kubenswrapper[4981]: I0128 15:25:22.382470 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 15:25:22 crc kubenswrapper[4981]: I0128 15:25:22.382561 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 15:25:22 crc kubenswrapper[4981]: I0128 15:25:22.933105 4981 generic.go:334] "Generic (PLEG): container finished" podID="a2b0f8b7-65ba-459e-a209-7349db6a0ba2" containerID="9253d3964d74b93e45879373a6a1e9160b9ce2c0509ccd6c5dd1db2a6cf4e434" exitCode=143 Jan 28 15:25:22 crc kubenswrapper[4981]: I0128 15:25:22.933149 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"a2b0f8b7-65ba-459e-a209-7349db6a0ba2","Type":"ContainerDied","Data":"9253d3964d74b93e45879373a6a1e9160b9ce2c0509ccd6c5dd1db2a6cf4e434"} Jan 28 15:25:22 crc kubenswrapper[4981]: I0128 15:25:22.936323 4981 generic.go:334] "Generic (PLEG): container finished" podID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerID="2c46eee2a2b6467c6fd4e4b8c2ee763703c3c60fe3b462070a599e381150164a" exitCode=0 Jan 28 15:25:22 crc kubenswrapper[4981]: I0128 15:25:22.936383 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fd49ada-7fea-40ee-a0f5-06a153be11e3","Type":"ContainerDied","Data":"2c46eee2a2b6467c6fd4e4b8c2ee763703c3c60fe3b462070a599e381150164a"} Jan 28 15:25:22 crc kubenswrapper[4981]: I0128 15:25:22.936428 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fd49ada-7fea-40ee-a0f5-06a153be11e3","Type":"ContainerDied","Data":"572bc345237996e7750d698beadba32cf4cd75e2e564414b42ce762c74f645db"} Jan 28 15:25:22 crc kubenswrapper[4981]: I0128 15:25:22.936385 4981 generic.go:334] "Generic (PLEG): container finished" podID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerID="572bc345237996e7750d698beadba32cf4cd75e2e564414b42ce762c74f645db" exitCode=2 Jan 28 15:25:22 crc kubenswrapper[4981]: I0128 15:25:22.936480 4981 generic.go:334] "Generic (PLEG): container finished" podID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerID="d55fec097929620b59df56d189bee6e0909e11b2fb663bba4e1ff86b80350472" exitCode=0 Jan 28 15:25:22 crc kubenswrapper[4981]: I0128 15:25:22.936544 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fd49ada-7fea-40ee-a0f5-06a153be11e3","Type":"ContainerDied","Data":"d55fec097929620b59df56d189bee6e0909e11b2fb663bba4e1ff86b80350472"} Jan 28 15:25:23 crc kubenswrapper[4981]: I0128 15:25:23.962327 4981 generic.go:334] "Generic (PLEG): container finished" podID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerID="d024d01bfeaa959a7d8f49bc4aa655b777debe15982487fec5973c8133c3b450" exitCode=0 Jan 28 15:25:23 crc kubenswrapper[4981]: I0128 15:25:23.962375 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fd49ada-7fea-40ee-a0f5-06a153be11e3","Type":"ContainerDied","Data":"d024d01bfeaa959a7d8f49bc4aa655b777debe15982487fec5973c8133c3b450"} Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.197033 4981 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.297776 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-ceilometer-tls-certs\") pod \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") "
Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.297872 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-config-data\") pod \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") "
Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.297917 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxwqz\" (UniqueName: \"kubernetes.io/projected/8fd49ada-7fea-40ee-a0f5-06a153be11e3-kube-api-access-dxwqz\") pod \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") "
Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.298011 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-sg-core-conf-yaml\") pod \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") "
Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.298048 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-combined-ca-bundle\") pod \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") "
Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.298088 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fd49ada-7fea-40ee-a0f5-06a153be11e3-log-httpd\") pod \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") "
Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.298139 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-scripts\") pod \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") "
Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.298177 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fd49ada-7fea-40ee-a0f5-06a153be11e3-run-httpd\") pod \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\" (UID: \"8fd49ada-7fea-40ee-a0f5-06a153be11e3\") "
Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.298589 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fd49ada-7fea-40ee-a0f5-06a153be11e3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8fd49ada-7fea-40ee-a0f5-06a153be11e3" (UID: "8fd49ada-7fea-40ee-a0f5-06a153be11e3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.298717 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fd49ada-7fea-40ee-a0f5-06a153be11e3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8fd49ada-7fea-40ee-a0f5-06a153be11e3" (UID: "8fd49ada-7fea-40ee-a0f5-06a153be11e3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.305425 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-scripts" (OuterVolumeSpecName: "scripts") pod "8fd49ada-7fea-40ee-a0f5-06a153be11e3" (UID: "8fd49ada-7fea-40ee-a0f5-06a153be11e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.305491 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fd49ada-7fea-40ee-a0f5-06a153be11e3-kube-api-access-dxwqz" (OuterVolumeSpecName: "kube-api-access-dxwqz") pod "8fd49ada-7fea-40ee-a0f5-06a153be11e3" (UID: "8fd49ada-7fea-40ee-a0f5-06a153be11e3"). InnerVolumeSpecName "kube-api-access-dxwqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.332715 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8fd49ada-7fea-40ee-a0f5-06a153be11e3" (UID: "8fd49ada-7fea-40ee-a0f5-06a153be11e3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.369925 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "8fd49ada-7fea-40ee-a0f5-06a153be11e3" (UID: "8fd49ada-7fea-40ee-a0f5-06a153be11e3"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.390800 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fd49ada-7fea-40ee-a0f5-06a153be11e3" (UID: "8fd49ada-7fea-40ee-a0f5-06a153be11e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.400177 4981 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.400228 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxwqz\" (UniqueName: \"kubernetes.io/projected/8fd49ada-7fea-40ee-a0f5-06a153be11e3-kube-api-access-dxwqz\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.400239 4981 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.400249 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.400258 4981 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fd49ada-7fea-40ee-a0f5-06a153be11e3-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.400266 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.400275 4981 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8fd49ada-7fea-40ee-a0f5-06a153be11e3-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.417402 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-config-data" (OuterVolumeSpecName: "config-data") pod "8fd49ada-7fea-40ee-a0f5-06a153be11e3" (UID: "8fd49ada-7fea-40ee-a0f5-06a153be11e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.502607 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fd49ada-7fea-40ee-a0f5-06a153be11e3-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.986658 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8fd49ada-7fea-40ee-a0f5-06a153be11e3","Type":"ContainerDied","Data":"6dc206c45b353d9242494688acbc6ad5272f4d41f1ca8cd2e27b3c5760393302"} Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.986760 4981 scope.go:117] "RemoveContainer" containerID="2c46eee2a2b6467c6fd4e4b8c2ee763703c3c60fe3b462070a599e381150164a" Jan 28 15:25:24 crc kubenswrapper[4981]: I0128 15:25:24.986868 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.020584 4981 scope.go:117] "RemoveContainer" containerID="572bc345237996e7750d698beadba32cf4cd75e2e564414b42ce762c74f645db" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.043995 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.054643 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.073266 4981 scope.go:117] "RemoveContainer" containerID="d024d01bfeaa959a7d8f49bc4aa655b777debe15982487fec5973c8133c3b450" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.077790 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:25:25 crc kubenswrapper[4981]: E0128 15:25:25.078294 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerName="ceilometer-notification-agent" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.078321 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerName="ceilometer-notification-agent" Jan 28 15:25:25 crc kubenswrapper[4981]: E0128 15:25:25.078364 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerName="proxy-httpd" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.078374 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerName="proxy-httpd" Jan 28 15:25:25 crc kubenswrapper[4981]: E0128 15:25:25.078398 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerName="ceilometer-central-agent" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.078406 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerName="ceilometer-central-agent" Jan 28 15:25:25 crc kubenswrapper[4981]: E0128 15:25:25.078419 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerName="sg-core" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.078426 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerName="sg-core" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.078639 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerName="ceilometer-central-agent" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.078656 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerName="sg-core" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.078682 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerName="proxy-httpd" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.078694 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" containerName="ceilometer-notification-agent" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.080953 4981 util.go:30] "No sandbox for pod can be found. 
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.083997 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.085495 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.085617 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.090665 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.096129 4981 scope.go:117] "RemoveContainer" containerID="d55fec097929620b59df56d189bee6e0909e11b2fb663bba4e1ff86b80350472"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.216296 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfb88da5-80c7-481b-89ba-2c5c08c258c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.216638 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfb88da5-80c7-481b-89ba-2c5c08c258c0-run-httpd\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.216687 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfb88da5-80c7-481b-89ba-2c5c08c258c0-config-data\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.216738 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfb88da5-80c7-481b-89ba-2c5c08c258c0-log-httpd\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.216772 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfb88da5-80c7-481b-89ba-2c5c08c258c0-scripts\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.216819 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfb88da5-80c7-481b-89ba-2c5c08c258c0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.216854 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfb88da5-80c7-481b-89ba-2c5c08c258c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.216919 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdxkd\" (UniqueName: \"kubernetes.io/projected/bfb88da5-80c7-481b-89ba-2c5c08c258c0-kube-api-access-qdxkd\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.318917 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfb88da5-80c7-481b-89ba-2c5c08c258c0-log-httpd\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.318990 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfb88da5-80c7-481b-89ba-2c5c08c258c0-scripts\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.319050 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfb88da5-80c7-481b-89ba-2c5c08c258c0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.319079 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfb88da5-80c7-481b-89ba-2c5c08c258c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.319149 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdxkd\" (UniqueName: \"kubernetes.io/projected/bfb88da5-80c7-481b-89ba-2c5c08c258c0-kube-api-access-qdxkd\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.319215 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfb88da5-80c7-481b-89ba-2c5c08c258c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.319244 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfb88da5-80c7-481b-89ba-2c5c08c258c0-run-httpd\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.319295 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfb88da5-80c7-481b-89ba-2c5c08c258c0-config-data\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.319509 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfb88da5-80c7-481b-89ba-2c5c08c258c0-log-httpd\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0"
Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.320223 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfb88da5-80c7-481b-89ba-2c5c08c258c0-run-httpd\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0"
15:25:25.320223 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bfb88da5-80c7-481b-89ba-2c5c08c258c0-run-httpd\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.324648 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bfb88da5-80c7-481b-89ba-2c5c08c258c0-scripts\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.325253 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bfb88da5-80c7-481b-89ba-2c5c08c258c0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.331171 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fd49ada-7fea-40ee-a0f5-06a153be11e3" path="/var/lib/kubelet/pods/8fd49ada-7fea-40ee-a0f5-06a153be11e3/volumes" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.332610 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bfb88da5-80c7-481b-89ba-2c5c08c258c0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.334029 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/bfb88da5-80c7-481b-89ba-2c5c08c258c0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.344979 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdxkd\" (UniqueName: \"kubernetes.io/projected/bfb88da5-80c7-481b-89ba-2c5c08c258c0-kube-api-access-qdxkd\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.345283 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bfb88da5-80c7-481b-89ba-2c5c08c258c0-config-data\") pod \"ceilometer-0\" (UID: \"bfb88da5-80c7-481b-89ba-2c5c08c258c0\") " pod="openstack/ceilometer-0" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.455910 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.504355 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.523419 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-combined-ca-bundle\") pod \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\" (UID: \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\") " Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.523599 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-config-data\") pod \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\" (UID: \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\") " Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.523619 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-logs\") pod \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\" (UID: \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\") " Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.523675 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scm9w\" (UniqueName: \"kubernetes.io/projected/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-kube-api-access-scm9w\") pod \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\" (UID: \"a2b0f8b7-65ba-459e-a209-7349db6a0ba2\") " Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.525546 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-logs" (OuterVolumeSpecName: "logs") pod "a2b0f8b7-65ba-459e-a209-7349db6a0ba2" (UID: "a2b0f8b7-65ba-459e-a209-7349db6a0ba2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.534048 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-kube-api-access-scm9w" (OuterVolumeSpecName: "kube-api-access-scm9w") pod "a2b0f8b7-65ba-459e-a209-7349db6a0ba2" (UID: "a2b0f8b7-65ba-459e-a209-7349db6a0ba2"). InnerVolumeSpecName "kube-api-access-scm9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.571058 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2b0f8b7-65ba-459e-a209-7349db6a0ba2" (UID: "a2b0f8b7-65ba-459e-a209-7349db6a0ba2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.606374 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-config-data" (OuterVolumeSpecName: "config-data") pod "a2b0f8b7-65ba-459e-a209-7349db6a0ba2" (UID: "a2b0f8b7-65ba-459e-a209-7349db6a0ba2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.625581 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.625628 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.625641 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.625652 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scm9w\" (UniqueName: \"kubernetes.io/projected/a2b0f8b7-65ba-459e-a209-7349db6a0ba2-kube-api-access-scm9w\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.998726 4981 generic.go:334] "Generic (PLEG): container finished" podID="a2b0f8b7-65ba-459e-a209-7349db6a0ba2" containerID="cd5207b105c82a367d3c808f33c579d27a6b784d8c09d362993f3c1423f822da" exitCode=0 Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.998769 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2b0f8b7-65ba-459e-a209-7349db6a0ba2","Type":"ContainerDied","Data":"cd5207b105c82a367d3c808f33c579d27a6b784d8c09d362993f3c1423f822da"} Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.998796 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2b0f8b7-65ba-459e-a209-7349db6a0ba2","Type":"ContainerDied","Data":"dd311ec3c811468eae5e322ded8a23ac2e77ec77ea7dd43c7e9280b7842b13e6"} Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.998812 4981 scope.go:117] "RemoveContainer" containerID="cd5207b105c82a367d3c808f33c579d27a6b784d8c09d362993f3c1423f822da" Jan 28 15:25:25 crc kubenswrapper[4981]: I0128 15:25:25.998848 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.024472 4981 scope.go:117] "RemoveContainer" containerID="9253d3964d74b93e45879373a6a1e9160b9ce2c0509ccd6c5dd1db2a6cf4e434" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.045018 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.053317 4981 scope.go:117] "RemoveContainer" containerID="cd5207b105c82a367d3c808f33c579d27a6b784d8c09d362993f3c1423f822da" Jan 28 15:25:26 crc kubenswrapper[4981]: E0128 15:25:26.055574 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd5207b105c82a367d3c808f33c579d27a6b784d8c09d362993f3c1423f822da\": container with ID starting with cd5207b105c82a367d3c808f33c579d27a6b784d8c09d362993f3c1423f822da not found: ID does not exist" containerID="cd5207b105c82a367d3c808f33c579d27a6b784d8c09d362993f3c1423f822da" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.055623 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd5207b105c82a367d3c808f33c579d27a6b784d8c09d362993f3c1423f822da"} err="failed to get container status \"cd5207b105c82a367d3c808f33c579d27a6b784d8c09d362993f3c1423f822da\": rpc error: code = NotFound desc = could not find container \"cd5207b105c82a367d3c808f33c579d27a6b784d8c09d362993f3c1423f822da\": container with ID starting with cd5207b105c82a367d3c808f33c579d27a6b784d8c09d362993f3c1423f822da not found: ID does not exist" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.055656 4981 scope.go:117] "RemoveContainer" containerID="9253d3964d74b93e45879373a6a1e9160b9ce2c0509ccd6c5dd1db2a6cf4e434" Jan 28 15:25:26 crc kubenswrapper[4981]: E0128 15:25:26.055970 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9253d3964d74b93e45879373a6a1e9160b9ce2c0509ccd6c5dd1db2a6cf4e434\": container with ID starting with 9253d3964d74b93e45879373a6a1e9160b9ce2c0509ccd6c5dd1db2a6cf4e434 not found: ID does not exist" containerID="9253d3964d74b93e45879373a6a1e9160b9ce2c0509ccd6c5dd1db2a6cf4e434" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.056003 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9253d3964d74b93e45879373a6a1e9160b9ce2c0509ccd6c5dd1db2a6cf4e434"} err="failed to get container status \"9253d3964d74b93e45879373a6a1e9160b9ce2c0509ccd6c5dd1db2a6cf4e434\": rpc error: code = NotFound desc = could not find container \"9253d3964d74b93e45879373a6a1e9160b9ce2c0509ccd6c5dd1db2a6cf4e434\": container with ID starting with 9253d3964d74b93e45879373a6a1e9160b9ce2c0509ccd6c5dd1db2a6cf4e434 not found: ID does not exist" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.062350 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.078022 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 15:25:26 crc kubenswrapper[4981]: E0128 15:25:26.078496 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b0f8b7-65ba-459e-a209-7349db6a0ba2" containerName="nova-api-api" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.078516 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b0f8b7-65ba-459e-a209-7349db6a0ba2" containerName="nova-api-api" Jan 28 15:25:26 crc 
kubenswrapper[4981]: E0128 15:25:26.078555 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2b0f8b7-65ba-459e-a209-7349db6a0ba2" containerName="nova-api-log" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.078561 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2b0f8b7-65ba-459e-a209-7349db6a0ba2" containerName="nova-api-log" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.078721 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2b0f8b7-65ba-459e-a209-7349db6a0ba2" containerName="nova-api-log" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.078742 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2b0f8b7-65ba-459e-a209-7349db6a0ba2" containerName="nova-api-api" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.079712 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.082169 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.083893 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.084079 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.088101 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.090513 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.100152 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.138677 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32ed80a8-9456-457a-8f46-f51a17f5a7a3-logs\") pod \"nova-api-0\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.138756 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52l8f\" (UniqueName: \"kubernetes.io/projected/32ed80a8-9456-457a-8f46-f51a17f5a7a3-kube-api-access-52l8f\") pod \"nova-api-0\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.138864 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-public-tls-certs\") pod \"nova-api-0\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.138970 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.139008 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.139098 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-config-data\") pod \"nova-api-0\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.240791 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.240890 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-config-data\") pod \"nova-api-0\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.240962 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32ed80a8-9456-457a-8f46-f51a17f5a7a3-logs\") pod \"nova-api-0\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.241028 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52l8f\" (UniqueName: \"kubernetes.io/projected/32ed80a8-9456-457a-8f46-f51a17f5a7a3-kube-api-access-52l8f\") pod \"nova-api-0\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.241056 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-public-tls-certs\") pod \"nova-api-0\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.241086 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.241627 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32ed80a8-9456-457a-8f46-f51a17f5a7a3-logs\") pod \"nova-api-0\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.247166 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-public-tls-certs\") pod \"nova-api-0\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.255809 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-config-data\") pod \"nova-api-0\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.255940 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.256559 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.263473 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52l8f\" (UniqueName: \"kubernetes.io/projected/32ed80a8-9456-457a-8f46-f51a17f5a7a3-kube-api-access-52l8f\") pod \"nova-api-0\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.443139 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 15:25:26 crc kubenswrapper[4981]: I0128 15:25:26.915841 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 15:25:26 crc kubenswrapper[4981]: W0128 15:25:26.921487 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32ed80a8_9456_457a_8f46_f51a17f5a7a3.slice/crio-c93f3a5615b7be5d1c209a3201f582a738dfae9cdb34834509429d94ef039d23 WatchSource:0}: Error finding container c93f3a5615b7be5d1c209a3201f582a738dfae9cdb34834509429d94ef039d23: Status 404 returned error can't find the container with id c93f3a5615b7be5d1c209a3201f582a738dfae9cdb34834509429d94ef039d23 Jan 28 15:25:27 crc kubenswrapper[4981]: I0128 15:25:27.008357 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfb88da5-80c7-481b-89ba-2c5c08c258c0","Type":"ContainerStarted","Data":"f6c5300e3fc86183e88251c3b28f40f912e045b10a15cb734cb9a14e52640340"} Jan 28 15:25:27 crc kubenswrapper[4981]: I0128 15:25:27.012095 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32ed80a8-9456-457a-8f46-f51a17f5a7a3","Type":"ContainerStarted","Data":"c93f3a5615b7be5d1c209a3201f582a738dfae9cdb34834509429d94ef039d23"} Jan 28 15:25:27 crc kubenswrapper[4981]: I0128 15:25:27.337921 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2b0f8b7-65ba-459e-a209-7349db6a0ba2" path="/var/lib/kubelet/pods/a2b0f8b7-65ba-459e-a209-7349db6a0ba2/volumes" Jan 28 15:25:27 crc kubenswrapper[4981]: I0128 15:25:27.365354 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:25:27 crc kubenswrapper[4981]: I0128 15:25:27.382346 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 15:25:27 crc kubenswrapper[4981]: I0128 15:25:27.382919 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 15:25:27 crc kubenswrapper[4981]: I0128 15:25:27.387289 4981 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.028723 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfb88da5-80c7-481b-89ba-2c5c08c258c0","Type":"ContainerStarted","Data":"2afe6d4d255b930c59ecd8f1401e7a096862f2dda55b3af2852f60e4e5e71a73"} Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.029673 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfb88da5-80c7-481b-89ba-2c5c08c258c0","Type":"ContainerStarted","Data":"8639b5b56532cea281f185ffd12fcf7629fd52eae5e6452c50e9408fcf02f5ad"} Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.032342 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32ed80a8-9456-457a-8f46-f51a17f5a7a3","Type":"ContainerStarted","Data":"fd3992205bd1c62e62c28970f746295cb0c258bd5e0653b024bc5c4d377df3fd"} Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.032413 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32ed80a8-9456-457a-8f46-f51a17f5a7a3","Type":"ContainerStarted","Data":"d01d17aa20d72d2ff65fdd1afc006fa02f69f34154b59927369e20bb0e8d339e"} Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.051256 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.059693 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.059670681 podStartE2EDuration="2.059670681s" podCreationTimestamp="2026-01-28 15:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:25:28.058778297 +0000 UTC m=+1339.510936578" watchObservedRunningTime="2026-01-28 15:25:28.059670681 +0000 UTC m=+1339.511828972" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.217131 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-chx2v"] Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.218367 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-chx2v" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.219999 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.221867 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.224201 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-chx2v"] Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.296400 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/494b1558-721b-44b1-a712-7bb60eb3f5cc-config-data\") pod \"nova-cell1-cell-mapping-chx2v\" (UID: \"494b1558-721b-44b1-a712-7bb60eb3f5cc\") " pod="openstack/nova-cell1-cell-mapping-chx2v" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.296461 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/494b1558-721b-44b1-a712-7bb60eb3f5cc-scripts\") pod \"nova-cell1-cell-mapping-chx2v\" (UID: \"494b1558-721b-44b1-a712-7bb60eb3f5cc\") " pod="openstack/nova-cell1-cell-mapping-chx2v" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.296492 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494b1558-721b-44b1-a712-7bb60eb3f5cc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-chx2v\" (UID: \"494b1558-721b-44b1-a712-7bb60eb3f5cc\") " pod="openstack/nova-cell1-cell-mapping-chx2v" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.296537 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pf76\" (UniqueName: \"kubernetes.io/projected/494b1558-721b-44b1-a712-7bb60eb3f5cc-kube-api-access-8pf76\") pod \"nova-cell1-cell-mapping-chx2v\" (UID: \"494b1558-721b-44b1-a712-7bb60eb3f5cc\") " pod="openstack/nova-cell1-cell-mapping-chx2v" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.398348 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/494b1558-721b-44b1-a712-7bb60eb3f5cc-config-data\") pod \"nova-cell1-cell-mapping-chx2v\" (UID: \"494b1558-721b-44b1-a712-7bb60eb3f5cc\") " pod="openstack/nova-cell1-cell-mapping-chx2v" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.398720 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/494b1558-721b-44b1-a712-7bb60eb3f5cc-scripts\") pod \"nova-cell1-cell-mapping-chx2v\" (UID: \"494b1558-721b-44b1-a712-7bb60eb3f5cc\") " pod="openstack/nova-cell1-cell-mapping-chx2v" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.398774 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494b1558-721b-44b1-a712-7bb60eb3f5cc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-chx2v\" (UID: \"494b1558-721b-44b1-a712-7bb60eb3f5cc\") " pod="openstack/nova-cell1-cell-mapping-chx2v" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.398851 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pf76\" (UniqueName: 
\"kubernetes.io/projected/494b1558-721b-44b1-a712-7bb60eb3f5cc-kube-api-access-8pf76\") pod \"nova-cell1-cell-mapping-chx2v\" (UID: \"494b1558-721b-44b1-a712-7bb60eb3f5cc\") " pod="openstack/nova-cell1-cell-mapping-chx2v" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.404391 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494b1558-721b-44b1-a712-7bb60eb3f5cc-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-chx2v\" (UID: \"494b1558-721b-44b1-a712-7bb60eb3f5cc\") " pod="openstack/nova-cell1-cell-mapping-chx2v" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.404495 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/494b1558-721b-44b1-a712-7bb60eb3f5cc-scripts\") pod \"nova-cell1-cell-mapping-chx2v\" (UID: \"494b1558-721b-44b1-a712-7bb60eb3f5cc\") " pod="openstack/nova-cell1-cell-mapping-chx2v" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.417909 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/494b1558-721b-44b1-a712-7bb60eb3f5cc-config-data\") pod \"nova-cell1-cell-mapping-chx2v\" (UID: \"494b1558-721b-44b1-a712-7bb60eb3f5cc\") " pod="openstack/nova-cell1-cell-mapping-chx2v" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.423802 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pf76\" (UniqueName: \"kubernetes.io/projected/494b1558-721b-44b1-a712-7bb60eb3f5cc-kube-api-access-8pf76\") pod \"nova-cell1-cell-mapping-chx2v\" (UID: \"494b1558-721b-44b1-a712-7bb60eb3f5cc\") " pod="openstack/nova-cell1-cell-mapping-chx2v" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.432350 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="dc3fea1f-380b-4b3e-9d89-76a8ed8faaec" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.432431 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="dc3fea1f-380b-4b3e-9d89-76a8ed8faaec" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 15:25:28 crc kubenswrapper[4981]: I0128 15:25:28.616097 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-chx2v" Jan 28 15:25:29 crc kubenswrapper[4981]: I0128 15:25:29.045882 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfb88da5-80c7-481b-89ba-2c5c08c258c0","Type":"ContainerStarted","Data":"45ec080ae92931f6f2ed099df0425ca97f51be3990cb6d57b7906b6075b162ea"} Jan 28 15:25:29 crc kubenswrapper[4981]: I0128 15:25:29.098941 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-chx2v"] Jan 28 15:25:29 crc kubenswrapper[4981]: W0128 15:25:29.102717 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod494b1558_721b_44b1_a712_7bb60eb3f5cc.slice/crio-4d7981ae97dffd493b17d21b0563bc9c391b87a29a13f1b0cbe2f61041e3bed6 WatchSource:0}: Error finding container 4d7981ae97dffd493b17d21b0563bc9c391b87a29a13f1b0cbe2f61041e3bed6: Status 404 returned error can't find the container with id 4d7981ae97dffd493b17d21b0563bc9c391b87a29a13f1b0cbe2f61041e3bed6 Jan 28 15:25:29 crc kubenswrapper[4981]: I0128 15:25:29.460402 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5" Jan 28 15:25:29 crc kubenswrapper[4981]: I0128 15:25:29.540408 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lwm9x"] Jan 28 15:25:29 crc kubenswrapper[4981]: I0128 15:25:29.540699 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" podUID="27c4ebb6-cd4b-4021-a139-49536ce42763" containerName="dnsmasq-dns" containerID="cri-o://4915f6b2b72ae1a6f2341636c6e8f15fa9d9817eb38b3630bcf8a483706cb17f" gracePeriod=10 Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.057529 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-chx2v" event={"ID":"494b1558-721b-44b1-a712-7bb60eb3f5cc","Type":"ContainerStarted","Data":"e28f2fd3c48141efba819eaa185c1dffd30e0418be9e506fedebec5eab1162aa"} Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.057872 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-chx2v" event={"ID":"494b1558-721b-44b1-a712-7bb60eb3f5cc","Type":"ContainerStarted","Data":"4d7981ae97dffd493b17d21b0563bc9c391b87a29a13f1b0cbe2f61041e3bed6"} Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.061454 4981 generic.go:334] "Generic (PLEG): container finished" podID="27c4ebb6-cd4b-4021-a139-49536ce42763" containerID="4915f6b2b72ae1a6f2341636c6e8f15fa9d9817eb38b3630bcf8a483706cb17f" exitCode=0 Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.061497 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" event={"ID":"27c4ebb6-cd4b-4021-a139-49536ce42763","Type":"ContainerDied","Data":"4915f6b2b72ae1a6f2341636c6e8f15fa9d9817eb38b3630bcf8a483706cb17f"} Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.077805 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-chx2v" podStartSLOduration=2.077785649 podStartE2EDuration="2.077785649s" podCreationTimestamp="2026-01-28 15:25:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:25:30.073451783 +0000 UTC m=+1341.525610024" watchObservedRunningTime="2026-01-28 15:25:30.077785649 +0000 UTC m=+1341.529943890" 
Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.166819 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.234532 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-config\") pod \"27c4ebb6-cd4b-4021-a139-49536ce42763\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.234624 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-dns-swift-storage-0\") pod \"27c4ebb6-cd4b-4021-a139-49536ce42763\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.234713 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-dns-svc\") pod \"27c4ebb6-cd4b-4021-a139-49536ce42763\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.234783 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58rx7\" (UniqueName: \"kubernetes.io/projected/27c4ebb6-cd4b-4021-a139-49536ce42763-kube-api-access-58rx7\") pod \"27c4ebb6-cd4b-4021-a139-49536ce42763\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.234847 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-ovsdbserver-nb\") pod \"27c4ebb6-cd4b-4021-a139-49536ce42763\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.234926 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-ovsdbserver-sb\") pod \"27c4ebb6-cd4b-4021-a139-49536ce42763\" (UID: \"27c4ebb6-cd4b-4021-a139-49536ce42763\") " Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.254747 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27c4ebb6-cd4b-4021-a139-49536ce42763-kube-api-access-58rx7" (OuterVolumeSpecName: "kube-api-access-58rx7") pod "27c4ebb6-cd4b-4021-a139-49536ce42763" (UID: "27c4ebb6-cd4b-4021-a139-49536ce42763"). InnerVolumeSpecName "kube-api-access-58rx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.314326 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "27c4ebb6-cd4b-4021-a139-49536ce42763" (UID: "27c4ebb6-cd4b-4021-a139-49536ce42763"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.314337 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-config" (OuterVolumeSpecName: "config") pod "27c4ebb6-cd4b-4021-a139-49536ce42763" (UID: "27c4ebb6-cd4b-4021-a139-49536ce42763"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.314883 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "27c4ebb6-cd4b-4021-a139-49536ce42763" (UID: "27c4ebb6-cd4b-4021-a139-49536ce42763"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.328053 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "27c4ebb6-cd4b-4021-a139-49536ce42763" (UID: "27c4ebb6-cd4b-4021-a139-49536ce42763"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.336747 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.336774 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.336783 4981 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.336794 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.336803 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58rx7\" (UniqueName: \"kubernetes.io/projected/27c4ebb6-cd4b-4021-a139-49536ce42763-kube-api-access-58rx7\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.349576 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "27c4ebb6-cd4b-4021-a139-49536ce42763" (UID: "27c4ebb6-cd4b-4021-a139-49536ce42763"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:25:30 crc kubenswrapper[4981]: I0128 15:25:30.438099 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27c4ebb6-cd4b-4021-a139-49536ce42763-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:31 crc kubenswrapper[4981]: I0128 15:25:31.078385 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bfb88da5-80c7-481b-89ba-2c5c08c258c0","Type":"ContainerStarted","Data":"ff77f2bc622c1857c7a603e5aab6a594babe10c1fae8c54768cdb5a945ee2620"} Jan 28 15:25:31 crc kubenswrapper[4981]: I0128 15:25:31.078956 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 15:25:31 crc kubenswrapper[4981]: I0128 15:25:31.081276 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" event={"ID":"27c4ebb6-cd4b-4021-a139-49536ce42763","Type":"ContainerDied","Data":"a4e546bc0330859c7f8be57b2689c67ffc9af1f1f7c572dcfbfb13cd0eb265df"} Jan 28 15:25:31 crc kubenswrapper[4981]: I0128 15:25:31.081304 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-845d6d6f59-lwm9x" Jan 28 15:25:31 crc kubenswrapper[4981]: I0128 15:25:31.081346 4981 scope.go:117] "RemoveContainer" containerID="4915f6b2b72ae1a6f2341636c6e8f15fa9d9817eb38b3630bcf8a483706cb17f" Jan 28 15:25:31 crc kubenswrapper[4981]: I0128 15:25:31.111523 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.56342713 podStartE2EDuration="6.111499757s" podCreationTimestamp="2026-01-28 15:25:25 +0000 UTC" firstStartedPulling="2026-01-28 15:25:26.087799345 +0000 UTC m=+1337.539957596" lastFinishedPulling="2026-01-28 15:25:30.635871972 +0000 UTC m=+1342.088030223" observedRunningTime="2026-01-28 15:25:31.109689709 +0000 UTC m=+1342.561847950" watchObservedRunningTime="2026-01-28 15:25:31.111499757 +0000 UTC m=+1342.563658008" Jan 28 15:25:31 crc kubenswrapper[4981]: I0128 15:25:31.144958 4981 scope.go:117] "RemoveContainer" containerID="35351ce6ec954a2c80601589a7bd641a5a6fb215cbe11640cc8473d570dfab66" Jan 28 15:25:31 crc kubenswrapper[4981]: I0128 15:25:31.156329 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lwm9x"] Jan 28 15:25:31 crc kubenswrapper[4981]: I0128 15:25:31.163949 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-845d6d6f59-lwm9x"] Jan 28 15:25:31 crc kubenswrapper[4981]: I0128 15:25:31.352110 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27c4ebb6-cd4b-4021-a139-49536ce42763" path="/var/lib/kubelet/pods/27c4ebb6-cd4b-4021-a139-49536ce42763/volumes" Jan 28 15:25:34 crc kubenswrapper[4981]: I0128 15:25:34.132639 4981 generic.go:334] "Generic (PLEG): container finished" podID="494b1558-721b-44b1-a712-7bb60eb3f5cc" containerID="e28f2fd3c48141efba819eaa185c1dffd30e0418be9e506fedebec5eab1162aa" exitCode=0 Jan 28 15:25:34 crc kubenswrapper[4981]: I0128 15:25:34.132706 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-chx2v" event={"ID":"494b1558-721b-44b1-a712-7bb60eb3f5cc","Type":"ContainerDied","Data":"e28f2fd3c48141efba819eaa185c1dffd30e0418be9e506fedebec5eab1162aa"} Jan 28 15:25:35 crc kubenswrapper[4981]: I0128 15:25:35.626691 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-chx2v" Jan 28 15:25:35 crc kubenswrapper[4981]: I0128 15:25:35.776490 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494b1558-721b-44b1-a712-7bb60eb3f5cc-combined-ca-bundle\") pod \"494b1558-721b-44b1-a712-7bb60eb3f5cc\" (UID: \"494b1558-721b-44b1-a712-7bb60eb3f5cc\") " Jan 28 15:25:35 crc kubenswrapper[4981]: I0128 15:25:35.776603 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/494b1558-721b-44b1-a712-7bb60eb3f5cc-config-data\") pod \"494b1558-721b-44b1-a712-7bb60eb3f5cc\" (UID: \"494b1558-721b-44b1-a712-7bb60eb3f5cc\") " Jan 28 15:25:35 crc kubenswrapper[4981]: I0128 15:25:35.776648 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pf76\" (UniqueName: \"kubernetes.io/projected/494b1558-721b-44b1-a712-7bb60eb3f5cc-kube-api-access-8pf76\") pod \"494b1558-721b-44b1-a712-7bb60eb3f5cc\" (UID: \"494b1558-721b-44b1-a712-7bb60eb3f5cc\") " Jan 28 15:25:35 crc kubenswrapper[4981]: I0128 15:25:35.776761 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/494b1558-721b-44b1-a712-7bb60eb3f5cc-scripts\") pod \"494b1558-721b-44b1-a712-7bb60eb3f5cc\" (UID: \"494b1558-721b-44b1-a712-7bb60eb3f5cc\") " Jan 28 15:25:35 crc kubenswrapper[4981]: I0128 15:25:35.782663 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/494b1558-721b-44b1-a712-7bb60eb3f5cc-kube-api-access-8pf76" (OuterVolumeSpecName: "kube-api-access-8pf76") pod "494b1558-721b-44b1-a712-7bb60eb3f5cc" (UID: "494b1558-721b-44b1-a712-7bb60eb3f5cc"). InnerVolumeSpecName "kube-api-access-8pf76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:35 crc kubenswrapper[4981]: I0128 15:25:35.783440 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/494b1558-721b-44b1-a712-7bb60eb3f5cc-scripts" (OuterVolumeSpecName: "scripts") pod "494b1558-721b-44b1-a712-7bb60eb3f5cc" (UID: "494b1558-721b-44b1-a712-7bb60eb3f5cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:35 crc kubenswrapper[4981]: I0128 15:25:35.801873 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/494b1558-721b-44b1-a712-7bb60eb3f5cc-config-data" (OuterVolumeSpecName: "config-data") pod "494b1558-721b-44b1-a712-7bb60eb3f5cc" (UID: "494b1558-721b-44b1-a712-7bb60eb3f5cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:35 crc kubenswrapper[4981]: I0128 15:25:35.824694 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/494b1558-721b-44b1-a712-7bb60eb3f5cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "494b1558-721b-44b1-a712-7bb60eb3f5cc" (UID: "494b1558-721b-44b1-a712-7bb60eb3f5cc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:35 crc kubenswrapper[4981]: I0128 15:25:35.879285 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/494b1558-721b-44b1-a712-7bb60eb3f5cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:35 crc kubenswrapper[4981]: I0128 15:25:35.879315 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/494b1558-721b-44b1-a712-7bb60eb3f5cc-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:35 crc kubenswrapper[4981]: I0128 15:25:35.879324 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pf76\" (UniqueName: \"kubernetes.io/projected/494b1558-721b-44b1-a712-7bb60eb3f5cc-kube-api-access-8pf76\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:35 crc kubenswrapper[4981]: I0128 15:25:35.879335 4981 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/494b1558-721b-44b1-a712-7bb60eb3f5cc-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:36 crc kubenswrapper[4981]: I0128 15:25:36.156241 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-chx2v" event={"ID":"494b1558-721b-44b1-a712-7bb60eb3f5cc","Type":"ContainerDied","Data":"4d7981ae97dffd493b17d21b0563bc9c391b87a29a13f1b0cbe2f61041e3bed6"} Jan 28 15:25:36 crc kubenswrapper[4981]: I0128 15:25:36.156779 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d7981ae97dffd493b17d21b0563bc9c391b87a29a13f1b0cbe2f61041e3bed6" Jan 28 15:25:36 crc kubenswrapper[4981]: I0128 15:25:36.156309 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-chx2v" Jan 28 15:25:36 crc kubenswrapper[4981]: I0128 15:25:36.364496 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 15:25:36 crc kubenswrapper[4981]: I0128 15:25:36.365144 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="32ed80a8-9456-457a-8f46-f51a17f5a7a3" containerName="nova-api-log" containerID="cri-o://d01d17aa20d72d2ff65fdd1afc006fa02f69f34154b59927369e20bb0e8d339e" gracePeriod=30 Jan 28 15:25:36 crc kubenswrapper[4981]: I0128 15:25:36.365218 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="32ed80a8-9456-457a-8f46-f51a17f5a7a3" containerName="nova-api-api" containerID="cri-o://fd3992205bd1c62e62c28970f746295cb0c258bd5e0653b024bc5c4d377df3fd" gracePeriod=30 Jan 28 15:25:36 crc kubenswrapper[4981]: I0128 15:25:36.376720 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 15:25:36 crc kubenswrapper[4981]: I0128 15:25:36.376969 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c99460aa-0162-4b6e-ad22-5f99def5df5e" containerName="nova-scheduler-scheduler" containerID="cri-o://962112e0cf7495f9ca2267b48bc4c1ba4abbbea396f47ad9597532c95522bda8" gracePeriod=30 Jan 28 15:25:36 crc kubenswrapper[4981]: I0128 15:25:36.433099 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 15:25:36 crc kubenswrapper[4981]: I0128 15:25:36.433780 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dc3fea1f-380b-4b3e-9d89-76a8ed8faaec" 
containerName="nova-metadata-log" containerID="cri-o://f32a0b7221383c4d4771a0c3634cf9d71481007a46cbbecea546acfc4cb82b31" gracePeriod=30 Jan 28 15:25:36 crc kubenswrapper[4981]: I0128 15:25:36.433847 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dc3fea1f-380b-4b3e-9d89-76a8ed8faaec" containerName="nova-metadata-metadata" containerID="cri-o://5649cb2f8dd03dae3217f1d3d27313ee2f50cb9a25e3c1daf85a7fabb692c250" gracePeriod=30 Jan 28 15:25:36 crc kubenswrapper[4981]: I0128 15:25:36.885568 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.009017 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32ed80a8-9456-457a-8f46-f51a17f5a7a3-logs\") pod \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.009380 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-config-data\") pod \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.009472 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52l8f\" (UniqueName: \"kubernetes.io/projected/32ed80a8-9456-457a-8f46-f51a17f5a7a3-kube-api-access-52l8f\") pod \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.009528 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-combined-ca-bundle\") pod \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.009565 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-public-tls-certs\") pod \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.009599 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-internal-tls-certs\") pod \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\" (UID: \"32ed80a8-9456-457a-8f46-f51a17f5a7a3\") " Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.011686 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32ed80a8-9456-457a-8f46-f51a17f5a7a3-logs" (OuterVolumeSpecName: "logs") pod "32ed80a8-9456-457a-8f46-f51a17f5a7a3" (UID: "32ed80a8-9456-457a-8f46-f51a17f5a7a3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.023323 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32ed80a8-9456-457a-8f46-f51a17f5a7a3-kube-api-access-52l8f" (OuterVolumeSpecName: "kube-api-access-52l8f") pod "32ed80a8-9456-457a-8f46-f51a17f5a7a3" (UID: "32ed80a8-9456-457a-8f46-f51a17f5a7a3"). InnerVolumeSpecName "kube-api-access-52l8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.051358 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32ed80a8-9456-457a-8f46-f51a17f5a7a3" (UID: "32ed80a8-9456-457a-8f46-f51a17f5a7a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.051375 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-config-data" (OuterVolumeSpecName: "config-data") pod "32ed80a8-9456-457a-8f46-f51a17f5a7a3" (UID: "32ed80a8-9456-457a-8f46-f51a17f5a7a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.111560 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32ed80a8-9456-457a-8f46-f51a17f5a7a3-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.111599 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.111615 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52l8f\" (UniqueName: \"kubernetes.io/projected/32ed80a8-9456-457a-8f46-f51a17f5a7a3-kube-api-access-52l8f\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.111631 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.123530 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "32ed80a8-9456-457a-8f46-f51a17f5a7a3" (UID: "32ed80a8-9456-457a-8f46-f51a17f5a7a3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.129978 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "32ed80a8-9456-457a-8f46-f51a17f5a7a3" (UID: "32ed80a8-9456-457a-8f46-f51a17f5a7a3"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.167118 4981 generic.go:334] "Generic (PLEG): container finished" podID="32ed80a8-9456-457a-8f46-f51a17f5a7a3" containerID="fd3992205bd1c62e62c28970f746295cb0c258bd5e0653b024bc5c4d377df3fd" exitCode=0 Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.167159 4981 generic.go:334] "Generic (PLEG): container finished" podID="32ed80a8-9456-457a-8f46-f51a17f5a7a3" containerID="d01d17aa20d72d2ff65fdd1afc006fa02f69f34154b59927369e20bb0e8d339e" exitCode=143 Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.167323 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.167383 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32ed80a8-9456-457a-8f46-f51a17f5a7a3","Type":"ContainerDied","Data":"fd3992205bd1c62e62c28970f746295cb0c258bd5e0653b024bc5c4d377df3fd"} Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.167420 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32ed80a8-9456-457a-8f46-f51a17f5a7a3","Type":"ContainerDied","Data":"d01d17aa20d72d2ff65fdd1afc006fa02f69f34154b59927369e20bb0e8d339e"} Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.167435 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"32ed80a8-9456-457a-8f46-f51a17f5a7a3","Type":"ContainerDied","Data":"c93f3a5615b7be5d1c209a3201f582a738dfae9cdb34834509429d94ef039d23"} Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.167452 4981 scope.go:117] "RemoveContainer" containerID="fd3992205bd1c62e62c28970f746295cb0c258bd5e0653b024bc5c4d377df3fd" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.172428 4981 generic.go:334] "Generic (PLEG): container finished" podID="c99460aa-0162-4b6e-ad22-5f99def5df5e" containerID="962112e0cf7495f9ca2267b48bc4c1ba4abbbea396f47ad9597532c95522bda8" exitCode=0 Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.172632 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c99460aa-0162-4b6e-ad22-5f99def5df5e","Type":"ContainerDied","Data":"962112e0cf7495f9ca2267b48bc4c1ba4abbbea396f47ad9597532c95522bda8"} Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.174624 4981 generic.go:334] "Generic (PLEG): container finished" podID="dc3fea1f-380b-4b3e-9d89-76a8ed8faaec" containerID="f32a0b7221383c4d4771a0c3634cf9d71481007a46cbbecea546acfc4cb82b31" exitCode=143 Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.174671 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec","Type":"ContainerDied","Data":"f32a0b7221383c4d4771a0c3634cf9d71481007a46cbbecea546acfc4cb82b31"} Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.213968 4981 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.214026 4981 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/32ed80a8-9456-457a-8f46-f51a17f5a7a3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.214282 4981 scope.go:117] "RemoveContainer" 
containerID="d01d17aa20d72d2ff65fdd1afc006fa02f69f34154b59927369e20bb0e8d339e" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.224641 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.250150 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.261328 4981 scope.go:117] "RemoveContainer" containerID="fd3992205bd1c62e62c28970f746295cb0c258bd5e0653b024bc5c4d377df3fd" Jan 28 15:25:37 crc kubenswrapper[4981]: E0128 15:25:37.261952 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd3992205bd1c62e62c28970f746295cb0c258bd5e0653b024bc5c4d377df3fd\": container with ID starting with fd3992205bd1c62e62c28970f746295cb0c258bd5e0653b024bc5c4d377df3fd not found: ID does not exist" containerID="fd3992205bd1c62e62c28970f746295cb0c258bd5e0653b024bc5c4d377df3fd" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.261996 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd3992205bd1c62e62c28970f746295cb0c258bd5e0653b024bc5c4d377df3fd"} err="failed to get container status \"fd3992205bd1c62e62c28970f746295cb0c258bd5e0653b024bc5c4d377df3fd\": rpc error: code = NotFound desc = could not find container \"fd3992205bd1c62e62c28970f746295cb0c258bd5e0653b024bc5c4d377df3fd\": container with ID starting with fd3992205bd1c62e62c28970f746295cb0c258bd5e0653b024bc5c4d377df3fd not found: ID does not exist" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.262055 4981 scope.go:117] "RemoveContainer" containerID="d01d17aa20d72d2ff65fdd1afc006fa02f69f34154b59927369e20bb0e8d339e" Jan 28 15:25:37 crc kubenswrapper[4981]: E0128 15:25:37.262447 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d01d17aa20d72d2ff65fdd1afc006fa02f69f34154b59927369e20bb0e8d339e\": container with ID starting with d01d17aa20d72d2ff65fdd1afc006fa02f69f34154b59927369e20bb0e8d339e not found: ID does not exist" containerID="d01d17aa20d72d2ff65fdd1afc006fa02f69f34154b59927369e20bb0e8d339e" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.262467 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d01d17aa20d72d2ff65fdd1afc006fa02f69f34154b59927369e20bb0e8d339e"} err="failed to get container status \"d01d17aa20d72d2ff65fdd1afc006fa02f69f34154b59927369e20bb0e8d339e\": rpc error: code = NotFound desc = could not find container \"d01d17aa20d72d2ff65fdd1afc006fa02f69f34154b59927369e20bb0e8d339e\": container with ID starting with d01d17aa20d72d2ff65fdd1afc006fa02f69f34154b59927369e20bb0e8d339e not found: ID does not exist" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.262483 4981 scope.go:117] "RemoveContainer" containerID="fd3992205bd1c62e62c28970f746295cb0c258bd5e0653b024bc5c4d377df3fd" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.262696 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd3992205bd1c62e62c28970f746295cb0c258bd5e0653b024bc5c4d377df3fd"} err="failed to get container status \"fd3992205bd1c62e62c28970f746295cb0c258bd5e0653b024bc5c4d377df3fd\": rpc error: code = NotFound desc = could not find container \"fd3992205bd1c62e62c28970f746295cb0c258bd5e0653b024bc5c4d377df3fd\": container with ID starting with 
fd3992205bd1c62e62c28970f746295cb0c258bd5e0653b024bc5c4d377df3fd not found: ID does not exist" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.262714 4981 scope.go:117] "RemoveContainer" containerID="d01d17aa20d72d2ff65fdd1afc006fa02f69f34154b59927369e20bb0e8d339e" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.262901 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d01d17aa20d72d2ff65fdd1afc006fa02f69f34154b59927369e20bb0e8d339e"} err="failed to get container status \"d01d17aa20d72d2ff65fdd1afc006fa02f69f34154b59927369e20bb0e8d339e\": rpc error: code = NotFound desc = could not find container \"d01d17aa20d72d2ff65fdd1afc006fa02f69f34154b59927369e20bb0e8d339e\": container with ID starting with d01d17aa20d72d2ff65fdd1afc006fa02f69f34154b59927369e20bb0e8d339e not found: ID does not exist" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.266193 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 15:25:37 crc kubenswrapper[4981]: E0128 15:25:37.266595 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ed80a8-9456-457a-8f46-f51a17f5a7a3" containerName="nova-api-api" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.266625 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ed80a8-9456-457a-8f46-f51a17f5a7a3" containerName="nova-api-api" Jan 28 15:25:37 crc kubenswrapper[4981]: E0128 15:25:37.266644 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="494b1558-721b-44b1-a712-7bb60eb3f5cc" containerName="nova-manage" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.266649 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="494b1558-721b-44b1-a712-7bb60eb3f5cc" containerName="nova-manage" Jan 28 15:25:37 crc kubenswrapper[4981]: E0128 15:25:37.266661 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c4ebb6-cd4b-4021-a139-49536ce42763" containerName="init" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.266667 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c4ebb6-cd4b-4021-a139-49536ce42763" containerName="init" Jan 28 15:25:37 crc kubenswrapper[4981]: E0128 15:25:37.266677 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27c4ebb6-cd4b-4021-a139-49536ce42763" containerName="dnsmasq-dns" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.266683 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="27c4ebb6-cd4b-4021-a139-49536ce42763" containerName="dnsmasq-dns" Jan 28 15:25:37 crc kubenswrapper[4981]: E0128 15:25:37.266691 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32ed80a8-9456-457a-8f46-f51a17f5a7a3" containerName="nova-api-log" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.266696 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="32ed80a8-9456-457a-8f46-f51a17f5a7a3" containerName="nova-api-log" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.266854 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ed80a8-9456-457a-8f46-f51a17f5a7a3" containerName="nova-api-api" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.266865 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="494b1558-721b-44b1-a712-7bb60eb3f5cc" containerName="nova-manage" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.266881 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="32ed80a8-9456-457a-8f46-f51a17f5a7a3" containerName="nova-api-log" Jan 28 
15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.266903 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="27c4ebb6-cd4b-4021-a139-49536ce42763" containerName="dnsmasq-dns" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.267830 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.269360 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.274854 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.274958 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.295933 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.333789 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32ed80a8-9456-457a-8f46-f51a17f5a7a3" path="/var/lib/kubelet/pods/32ed80a8-9456-457a-8f46-f51a17f5a7a3/volumes" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.381942 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.419797 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c04c7269-f8ca-43e7-a204-d1ab4429f2f5-logs\") pod \"nova-api-0\" (UID: \"c04c7269-f8ca-43e7-a204-d1ab4429f2f5\") " pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.420113 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04c7269-f8ca-43e7-a204-d1ab4429f2f5-config-data\") pod \"nova-api-0\" (UID: \"c04c7269-f8ca-43e7-a204-d1ab4429f2f5\") " pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.420279 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04c7269-f8ca-43e7-a204-d1ab4429f2f5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c04c7269-f8ca-43e7-a204-d1ab4429f2f5\") " pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.420608 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldjv4\" (UniqueName: \"kubernetes.io/projected/c04c7269-f8ca-43e7-a204-d1ab4429f2f5-kube-api-access-ldjv4\") pod \"nova-api-0\" (UID: \"c04c7269-f8ca-43e7-a204-d1ab4429f2f5\") " pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.420678 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04c7269-f8ca-43e7-a204-d1ab4429f2f5-public-tls-certs\") pod \"nova-api-0\" (UID: \"c04c7269-f8ca-43e7-a204-d1ab4429f2f5\") " pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.420813 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c04c7269-f8ca-43e7-a204-d1ab4429f2f5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c04c7269-f8ca-43e7-a204-d1ab4429f2f5\") " pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.521918 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99460aa-0162-4b6e-ad22-5f99def5df5e-combined-ca-bundle\") pod \"c99460aa-0162-4b6e-ad22-5f99def5df5e\" (UID: \"c99460aa-0162-4b6e-ad22-5f99def5df5e\") " Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.522025 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2nm6\" (UniqueName: \"kubernetes.io/projected/c99460aa-0162-4b6e-ad22-5f99def5df5e-kube-api-access-c2nm6\") pod \"c99460aa-0162-4b6e-ad22-5f99def5df5e\" (UID: \"c99460aa-0162-4b6e-ad22-5f99def5df5e\") " Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.522063 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99460aa-0162-4b6e-ad22-5f99def5df5e-config-data\") pod \"c99460aa-0162-4b6e-ad22-5f99def5df5e\" (UID: \"c99460aa-0162-4b6e-ad22-5f99def5df5e\") " Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.522479 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c04c7269-f8ca-43e7-a204-d1ab4429f2f5-logs\") pod \"nova-api-0\" (UID: \"c04c7269-f8ca-43e7-a204-d1ab4429f2f5\") " pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.522559 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04c7269-f8ca-43e7-a204-d1ab4429f2f5-config-data\") pod \"nova-api-0\" (UID: \"c04c7269-f8ca-43e7-a204-d1ab4429f2f5\") " pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.522585 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04c7269-f8ca-43e7-a204-d1ab4429f2f5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c04c7269-f8ca-43e7-a204-d1ab4429f2f5\") " pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.522641 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldjv4\" (UniqueName: \"kubernetes.io/projected/c04c7269-f8ca-43e7-a204-d1ab4429f2f5-kube-api-access-ldjv4\") pod \"nova-api-0\" (UID: \"c04c7269-f8ca-43e7-a204-d1ab4429f2f5\") " pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.522662 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04c7269-f8ca-43e7-a204-d1ab4429f2f5-public-tls-certs\") pod \"nova-api-0\" (UID: \"c04c7269-f8ca-43e7-a204-d1ab4429f2f5\") " pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.522691 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04c7269-f8ca-43e7-a204-d1ab4429f2f5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c04c7269-f8ca-43e7-a204-d1ab4429f2f5\") " pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.523692 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c04c7269-f8ca-43e7-a204-d1ab4429f2f5-logs\") pod \"nova-api-0\" (UID: \"c04c7269-f8ca-43e7-a204-d1ab4429f2f5\") " pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.527802 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c04c7269-f8ca-43e7-a204-d1ab4429f2f5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c04c7269-f8ca-43e7-a204-d1ab4429f2f5\") " pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.527911 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c04c7269-f8ca-43e7-a204-d1ab4429f2f5-config-data\") pod \"nova-api-0\" (UID: \"c04c7269-f8ca-43e7-a204-d1ab4429f2f5\") " pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.528160 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04c7269-f8ca-43e7-a204-d1ab4429f2f5-public-tls-certs\") pod \"nova-api-0\" (UID: \"c04c7269-f8ca-43e7-a204-d1ab4429f2f5\") " pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.528350 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c04c7269-f8ca-43e7-a204-d1ab4429f2f5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c04c7269-f8ca-43e7-a204-d1ab4429f2f5\") " pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.528607 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99460aa-0162-4b6e-ad22-5f99def5df5e-kube-api-access-c2nm6" (OuterVolumeSpecName: "kube-api-access-c2nm6") pod "c99460aa-0162-4b6e-ad22-5f99def5df5e" (UID: "c99460aa-0162-4b6e-ad22-5f99def5df5e"). InnerVolumeSpecName "kube-api-access-c2nm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.541880 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldjv4\" (UniqueName: \"kubernetes.io/projected/c04c7269-f8ca-43e7-a204-d1ab4429f2f5-kube-api-access-ldjv4\") pod \"nova-api-0\" (UID: \"c04c7269-f8ca-43e7-a204-d1ab4429f2f5\") " pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.560120 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99460aa-0162-4b6e-ad22-5f99def5df5e-config-data" (OuterVolumeSpecName: "config-data") pod "c99460aa-0162-4b6e-ad22-5f99def5df5e" (UID: "c99460aa-0162-4b6e-ad22-5f99def5df5e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.577040 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99460aa-0162-4b6e-ad22-5f99def5df5e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c99460aa-0162-4b6e-ad22-5f99def5df5e" (UID: "c99460aa-0162-4b6e-ad22-5f99def5df5e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.594735 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.624555 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99460aa-0162-4b6e-ad22-5f99def5df5e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.624595 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2nm6\" (UniqueName: \"kubernetes.io/projected/c99460aa-0162-4b6e-ad22-5f99def5df5e-kube-api-access-c2nm6\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:37 crc kubenswrapper[4981]: I0128 15:25:37.624609 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99460aa-0162-4b6e-ad22-5f99def5df5e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.091660 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.187164 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c99460aa-0162-4b6e-ad22-5f99def5df5e","Type":"ContainerDied","Data":"6bd41f92b18e14eeabf432bdcec55055d8e1a11d2af76c26d7aa05f8b0f585ec"} Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.187234 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.187282 4981 scope.go:117] "RemoveContainer" containerID="962112e0cf7495f9ca2267b48bc4c1ba4abbbea396f47ad9597532c95522bda8" Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.188669 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c04c7269-f8ca-43e7-a204-d1ab4429f2f5","Type":"ContainerStarted","Data":"5ed24cbc03a93ed5b466c86c2c96acf916cc1b8f1720df00b6beacb053597024"} Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.257017 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.274300 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.285436 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 15:25:38 crc kubenswrapper[4981]: E0128 15:25:38.285890 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99460aa-0162-4b6e-ad22-5f99def5df5e" containerName="nova-scheduler-scheduler" Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.285917 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99460aa-0162-4b6e-ad22-5f99def5df5e" containerName="nova-scheduler-scheduler" Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.296691 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99460aa-0162-4b6e-ad22-5f99def5df5e" containerName="nova-scheduler-scheduler" Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.297796 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.306732 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.318372 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.441427 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtqbc\" (UniqueName: \"kubernetes.io/projected/8baced8e-6e29-4788-a841-d5c7d8a5e294-kube-api-access-mtqbc\") pod \"nova-scheduler-0\" (UID: \"8baced8e-6e29-4788-a841-d5c7d8a5e294\") " pod="openstack/nova-scheduler-0" Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.441486 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8baced8e-6e29-4788-a841-d5c7d8a5e294-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8baced8e-6e29-4788-a841-d5c7d8a5e294\") " pod="openstack/nova-scheduler-0" Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.441566 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8baced8e-6e29-4788-a841-d5c7d8a5e294-config-data\") pod \"nova-scheduler-0\" (UID: \"8baced8e-6e29-4788-a841-d5c7d8a5e294\") " pod="openstack/nova-scheduler-0" Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.543563 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtqbc\" (UniqueName: \"kubernetes.io/projected/8baced8e-6e29-4788-a841-d5c7d8a5e294-kube-api-access-mtqbc\") pod \"nova-scheduler-0\" (UID: \"8baced8e-6e29-4788-a841-d5c7d8a5e294\") " pod="openstack/nova-scheduler-0" Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.543618 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8baced8e-6e29-4788-a841-d5c7d8a5e294-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8baced8e-6e29-4788-a841-d5c7d8a5e294\") " pod="openstack/nova-scheduler-0" Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.543692 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8baced8e-6e29-4788-a841-d5c7d8a5e294-config-data\") pod \"nova-scheduler-0\" (UID: \"8baced8e-6e29-4788-a841-d5c7d8a5e294\") " pod="openstack/nova-scheduler-0" Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.547815 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8baced8e-6e29-4788-a841-d5c7d8a5e294-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8baced8e-6e29-4788-a841-d5c7d8a5e294\") " pod="openstack/nova-scheduler-0" Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.551146 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8baced8e-6e29-4788-a841-d5c7d8a5e294-config-data\") pod \"nova-scheduler-0\" (UID: \"8baced8e-6e29-4788-a841-d5c7d8a5e294\") " pod="openstack/nova-scheduler-0" Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.559048 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtqbc\" (UniqueName: 
\"kubernetes.io/projected/8baced8e-6e29-4788-a841-d5c7d8a5e294-kube-api-access-mtqbc\") pod \"nova-scheduler-0\" (UID: \"8baced8e-6e29-4788-a841-d5c7d8a5e294\") " pod="openstack/nova-scheduler-0" Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.694707 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 15:25:38 crc kubenswrapper[4981]: I0128 15:25:38.998868 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 15:25:39 crc kubenswrapper[4981]: I0128 15:25:39.196775 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8baced8e-6e29-4788-a841-d5c7d8a5e294","Type":"ContainerStarted","Data":"3e4bf9fac358e5514b2e319a543d74cbfcb9b416d6ca59a5b8d9bc2b2e6fdf30"} Jan 28 15:25:39 crc kubenswrapper[4981]: I0128 15:25:39.198422 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c04c7269-f8ca-43e7-a204-d1ab4429f2f5","Type":"ContainerStarted","Data":"6ae3612705f92bc2712dc32363f3c152c48dd31d6644c0d02112e42aae5a1c92"} Jan 28 15:25:39 crc kubenswrapper[4981]: I0128 15:25:39.198443 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c04c7269-f8ca-43e7-a204-d1ab4429f2f5","Type":"ContainerStarted","Data":"52bdde0d28fef3dc22b49410639eda58142d7e977aea49bca9a6f94368f85442"} Jan 28 15:25:39 crc kubenswrapper[4981]: I0128 15:25:39.230333 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.230308961 podStartE2EDuration="2.230308961s" podCreationTimestamp="2026-01-28 15:25:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:25:39.220229769 +0000 UTC m=+1350.672388010" watchObservedRunningTime="2026-01-28 15:25:39.230308961 +0000 UTC m=+1350.682467202" Jan 28 15:25:39 crc kubenswrapper[4981]: I0128 15:25:39.332593 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c99460aa-0162-4b6e-ad22-5f99def5df5e" path="/var/lib/kubelet/pods/c99460aa-0162-4b6e-ad22-5f99def5df5e/volumes" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.107327 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.211626 4981 generic.go:334] "Generic (PLEG): container finished" podID="dc3fea1f-380b-4b3e-9d89-76a8ed8faaec" containerID="5649cb2f8dd03dae3217f1d3d27313ee2f50cb9a25e3c1daf85a7fabb692c250" exitCode=0 Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.211698 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec","Type":"ContainerDied","Data":"5649cb2f8dd03dae3217f1d3d27313ee2f50cb9a25e3c1daf85a7fabb692c250"} Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.211726 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec","Type":"ContainerDied","Data":"ac44632da930201e91d1c3b3243d29e28fb925b29681b9e8b5f1cc20747d53a0"} Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.211742 4981 scope.go:117] "RemoveContainer" containerID="5649cb2f8dd03dae3217f1d3d27313ee2f50cb9a25e3c1daf85a7fabb692c250" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.211868 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.223324 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8baced8e-6e29-4788-a841-d5c7d8a5e294","Type":"ContainerStarted","Data":"2122b7b7a4a49f271549db4dcb69752249241c6f659b999683b10a1033c6700f"} Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.242474 4981 scope.go:117] "RemoveContainer" containerID="f32a0b7221383c4d4771a0c3634cf9d71481007a46cbbecea546acfc4cb82b31" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.243715 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.243696072 podStartE2EDuration="2.243696072s" podCreationTimestamp="2026-01-28 15:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:25:40.238742389 +0000 UTC m=+1351.690900630" watchObservedRunningTime="2026-01-28 15:25:40.243696072 +0000 UTC m=+1351.695854313" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.272498 4981 scope.go:117] "RemoveContainer" containerID="5649cb2f8dd03dae3217f1d3d27313ee2f50cb9a25e3c1daf85a7fabb692c250" Jan 28 15:25:40 crc kubenswrapper[4981]: E0128 15:25:40.273175 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5649cb2f8dd03dae3217f1d3d27313ee2f50cb9a25e3c1daf85a7fabb692c250\": container with ID starting with 5649cb2f8dd03dae3217f1d3d27313ee2f50cb9a25e3c1daf85a7fabb692c250 not found: ID does not exist" containerID="5649cb2f8dd03dae3217f1d3d27313ee2f50cb9a25e3c1daf85a7fabb692c250" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.273241 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5649cb2f8dd03dae3217f1d3d27313ee2f50cb9a25e3c1daf85a7fabb692c250"} err="failed to get container status \"5649cb2f8dd03dae3217f1d3d27313ee2f50cb9a25e3c1daf85a7fabb692c250\": rpc error: code = NotFound desc = could not find container \"5649cb2f8dd03dae3217f1d3d27313ee2f50cb9a25e3c1daf85a7fabb692c250\": container with ID starting with 5649cb2f8dd03dae3217f1d3d27313ee2f50cb9a25e3c1daf85a7fabb692c250 not found: ID does not exist" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.273266 4981 scope.go:117] "RemoveContainer" containerID="f32a0b7221383c4d4771a0c3634cf9d71481007a46cbbecea546acfc4cb82b31" Jan 28 15:25:40 crc kubenswrapper[4981]: E0128 15:25:40.273716 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f32a0b7221383c4d4771a0c3634cf9d71481007a46cbbecea546acfc4cb82b31\": container with ID starting with f32a0b7221383c4d4771a0c3634cf9d71481007a46cbbecea546acfc4cb82b31 not found: ID does not exist" containerID="f32a0b7221383c4d4771a0c3634cf9d71481007a46cbbecea546acfc4cb82b31" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.273747 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f32a0b7221383c4d4771a0c3634cf9d71481007a46cbbecea546acfc4cb82b31"} err="failed to get container status \"f32a0b7221383c4d4771a0c3634cf9d71481007a46cbbecea546acfc4cb82b31\": rpc error: code = NotFound desc = could not find container \"f32a0b7221383c4d4771a0c3634cf9d71481007a46cbbecea546acfc4cb82b31\": container with ID starting with f32a0b7221383c4d4771a0c3634cf9d71481007a46cbbecea546acfc4cb82b31 not found: ID 
does not exist" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.277720 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-combined-ca-bundle\") pod \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.277813 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-config-data\") pod \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.277944 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcpsv\" (UniqueName: \"kubernetes.io/projected/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-kube-api-access-pcpsv\") pod \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.278022 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-nova-metadata-tls-certs\") pod \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.278061 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-logs\") pod \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\" (UID: \"dc3fea1f-380b-4b3e-9d89-76a8ed8faaec\") " Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.279699 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-logs" (OuterVolumeSpecName: "logs") pod "dc3fea1f-380b-4b3e-9d89-76a8ed8faaec" (UID: "dc3fea1f-380b-4b3e-9d89-76a8ed8faaec"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.283093 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-kube-api-access-pcpsv" (OuterVolumeSpecName: "kube-api-access-pcpsv") pod "dc3fea1f-380b-4b3e-9d89-76a8ed8faaec" (UID: "dc3fea1f-380b-4b3e-9d89-76a8ed8faaec"). InnerVolumeSpecName "kube-api-access-pcpsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.304666 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc3fea1f-380b-4b3e-9d89-76a8ed8faaec" (UID: "dc3fea1f-380b-4b3e-9d89-76a8ed8faaec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.321622 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-config-data" (OuterVolumeSpecName: "config-data") pod "dc3fea1f-380b-4b3e-9d89-76a8ed8faaec" (UID: "dc3fea1f-380b-4b3e-9d89-76a8ed8faaec"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.348008 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "dc3fea1f-380b-4b3e-9d89-76a8ed8faaec" (UID: "dc3fea1f-380b-4b3e-9d89-76a8ed8faaec"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.379895 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcpsv\" (UniqueName: \"kubernetes.io/projected/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-kube-api-access-pcpsv\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.380237 4981 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.380287 4981 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-logs\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.380299 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.380311 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.565107 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.575824 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.603151 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 15:25:40 crc kubenswrapper[4981]: E0128 15:25:40.603708 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3fea1f-380b-4b3e-9d89-76a8ed8faaec" containerName="nova-metadata-metadata" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.603733 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3fea1f-380b-4b3e-9d89-76a8ed8faaec" containerName="nova-metadata-metadata" Jan 28 15:25:40 crc kubenswrapper[4981]: E0128 15:25:40.603758 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc3fea1f-380b-4b3e-9d89-76a8ed8faaec" containerName="nova-metadata-log" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.603767 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc3fea1f-380b-4b3e-9d89-76a8ed8faaec" containerName="nova-metadata-log" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.604008 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3fea1f-380b-4b3e-9d89-76a8ed8faaec" containerName="nova-metadata-log" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.604047 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc3fea1f-380b-4b3e-9d89-76a8ed8faaec" containerName="nova-metadata-metadata" Jan 28 15:25:40 
crc kubenswrapper[4981]: I0128 15:25:40.605319 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.607287 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.607459 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.616737 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.788090 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c\") " pod="openstack/nova-metadata-0" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.788217 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c-logs\") pod \"nova-metadata-0\" (UID: \"80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c\") " pod="openstack/nova-metadata-0" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.788318 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c\") " pod="openstack/nova-metadata-0" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.788514 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c-config-data\") pod \"nova-metadata-0\" (UID: \"80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c\") " pod="openstack/nova-metadata-0" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.788562 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm2kw\" (UniqueName: \"kubernetes.io/projected/80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c-kube-api-access-lm2kw\") pod \"nova-metadata-0\" (UID: \"80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c\") " pod="openstack/nova-metadata-0" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.890029 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c-config-data\") pod \"nova-metadata-0\" (UID: \"80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c\") " pod="openstack/nova-metadata-0" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.890423 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm2kw\" (UniqueName: \"kubernetes.io/projected/80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c-kube-api-access-lm2kw\") pod \"nova-metadata-0\" (UID: \"80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c\") " pod="openstack/nova-metadata-0" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.890756 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c\") " pod="openstack/nova-metadata-0" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.891450 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c-logs\") pod \"nova-metadata-0\" (UID: \"80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c\") " pod="openstack/nova-metadata-0" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.891613 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c\") " pod="openstack/nova-metadata-0" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.891903 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c-logs\") pod \"nova-metadata-0\" (UID: \"80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c\") " pod="openstack/nova-metadata-0" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.898790 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c\") " pod="openstack/nova-metadata-0" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.898969 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c-config-data\") pod \"nova-metadata-0\" (UID: \"80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c\") " pod="openstack/nova-metadata-0" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.918933 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm2kw\" (UniqueName: \"kubernetes.io/projected/80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c-kube-api-access-lm2kw\") pod \"nova-metadata-0\" (UID: \"80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c\") " pod="openstack/nova-metadata-0" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.925454 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c\") " pod="openstack/nova-metadata-0" Jan 28 15:25:40 crc kubenswrapper[4981]: I0128 15:25:40.929822 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 15:25:41 crc kubenswrapper[4981]: I0128 15:25:41.328104 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc3fea1f-380b-4b3e-9d89-76a8ed8faaec" path="/var/lib/kubelet/pods/dc3fea1f-380b-4b3e-9d89-76a8ed8faaec/volumes" Jan 28 15:25:41 crc kubenswrapper[4981]: I0128 15:25:41.456756 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 15:25:41 crc kubenswrapper[4981]: W0128 15:25:41.472596 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80f4d9f4_cc1c_4005_a9e5_f3251ff08c0c.slice/crio-780d7dbc91cebc22b48b6aaf4e6290d3f163c8351bf4dbe5e1fbb69398c63804 WatchSource:0}: Error finding container 780d7dbc91cebc22b48b6aaf4e6290d3f163c8351bf4dbe5e1fbb69398c63804: Status 404 returned error can't find the container with id 780d7dbc91cebc22b48b6aaf4e6290d3f163c8351bf4dbe5e1fbb69398c63804 Jan 28 15:25:42 crc kubenswrapper[4981]: I0128 15:25:42.250807 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c","Type":"ContainerStarted","Data":"146dd96e6acf1ffddfe1d09595859da282cfb8163bcc081d9cad3a4444605298"} Jan 28 15:25:42 crc kubenswrapper[4981]: I0128 15:25:42.251372 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c","Type":"ContainerStarted","Data":"95eab6174e4d9638684f6e42062dea8e4d50d6af6592c5b72cb26c6d78f18284"} Jan 28 15:25:42 crc kubenswrapper[4981]: I0128 15:25:42.251383 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c","Type":"ContainerStarted","Data":"780d7dbc91cebc22b48b6aaf4e6290d3f163c8351bf4dbe5e1fbb69398c63804"} Jan 28 15:25:42 crc kubenswrapper[4981]: I0128 15:25:42.268940 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.268918181 podStartE2EDuration="2.268918181s" podCreationTimestamp="2026-01-28 15:25:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:25:42.267767521 +0000 UTC m=+1353.719925772" watchObservedRunningTime="2026-01-28 15:25:42.268918181 +0000 UTC m=+1353.721076422" Jan 28 15:25:43 crc kubenswrapper[4981]: I0128 15:25:43.695619 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 15:25:45 crc kubenswrapper[4981]: I0128 15:25:45.930601 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 15:25:45 crc kubenswrapper[4981]: I0128 15:25:45.930876 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 15:25:47 crc kubenswrapper[4981]: I0128 15:25:47.595273 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 15:25:47 crc kubenswrapper[4981]: I0128 15:25:47.595357 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 15:25:48 crc kubenswrapper[4981]: I0128 15:25:48.610508 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c04c7269-f8ca-43e7-a204-d1ab4429f2f5" containerName="nova-api-api" probeResult="failure" output="Get 
\"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 15:25:48 crc kubenswrapper[4981]: I0128 15:25:48.610508 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c04c7269-f8ca-43e7-a204-d1ab4429f2f5" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.203:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 15:25:48 crc kubenswrapper[4981]: I0128 15:25:48.695606 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 15:25:48 crc kubenswrapper[4981]: I0128 15:25:48.728923 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 15:25:49 crc kubenswrapper[4981]: I0128 15:25:49.359690 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 15:25:50 crc kubenswrapper[4981]: I0128 15:25:50.930590 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 15:25:50 crc kubenswrapper[4981]: I0128 15:25:50.931875 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 15:25:51 crc kubenswrapper[4981]: I0128 15:25:51.947313 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 15:25:51 crc kubenswrapper[4981]: I0128 15:25:51.947364 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.205:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 15:25:55 crc kubenswrapper[4981]: I0128 15:25:55.514804 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 15:25:57 crc kubenswrapper[4981]: I0128 15:25:57.658229 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 15:25:57 crc kubenswrapper[4981]: I0128 15:25:57.658994 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 15:25:57 crc kubenswrapper[4981]: I0128 15:25:57.735965 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 15:25:57 crc kubenswrapper[4981]: I0128 15:25:57.742237 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 15:25:58 crc kubenswrapper[4981]: I0128 15:25:58.417421 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 15:25:58 crc kubenswrapper[4981]: I0128 15:25:58.428345 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 15:26:00 crc kubenswrapper[4981]: I0128 15:26:00.938256 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 15:26:00 crc kubenswrapper[4981]: I0128 15:26:00.944446 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" 
Jan 28 15:26:00 crc kubenswrapper[4981]: I0128 15:26:00.947590 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 15:26:01 crc kubenswrapper[4981]: I0128 15:26:01.545850 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8mdsh"] Jan 28 15:26:01 crc kubenswrapper[4981]: I0128 15:26:01.547903 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8mdsh" Jan 28 15:26:01 crc kubenswrapper[4981]: I0128 15:26:01.563992 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8mdsh"] Jan 28 15:26:01 crc kubenswrapper[4981]: I0128 15:26:01.601063 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 15:26:01 crc kubenswrapper[4981]: I0128 15:26:01.601811 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b8af8d-76d2-4c5d-bb94-f9a4490796ca-utilities\") pod \"redhat-operators-8mdsh\" (UID: \"83b8af8d-76d2-4c5d-bb94-f9a4490796ca\") " pod="openshift-marketplace/redhat-operators-8mdsh" Jan 28 15:26:01 crc kubenswrapper[4981]: I0128 15:26:01.601923 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b8af8d-76d2-4c5d-bb94-f9a4490796ca-catalog-content\") pod \"redhat-operators-8mdsh\" (UID: \"83b8af8d-76d2-4c5d-bb94-f9a4490796ca\") " pod="openshift-marketplace/redhat-operators-8mdsh" Jan 28 15:26:01 crc kubenswrapper[4981]: I0128 15:26:01.601946 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j8b8\" (UniqueName: \"kubernetes.io/projected/83b8af8d-76d2-4c5d-bb94-f9a4490796ca-kube-api-access-7j8b8\") pod \"redhat-operators-8mdsh\" (UID: \"83b8af8d-76d2-4c5d-bb94-f9a4490796ca\") " pod="openshift-marketplace/redhat-operators-8mdsh" Jan 28 15:26:01 crc kubenswrapper[4981]: I0128 15:26:01.704310 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b8af8d-76d2-4c5d-bb94-f9a4490796ca-utilities\") pod \"redhat-operators-8mdsh\" (UID: \"83b8af8d-76d2-4c5d-bb94-f9a4490796ca\") " pod="openshift-marketplace/redhat-operators-8mdsh" Jan 28 15:26:01 crc kubenswrapper[4981]: I0128 15:26:01.704632 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b8af8d-76d2-4c5d-bb94-f9a4490796ca-catalog-content\") pod \"redhat-operators-8mdsh\" (UID: \"83b8af8d-76d2-4c5d-bb94-f9a4490796ca\") " pod="openshift-marketplace/redhat-operators-8mdsh" Jan 28 15:26:01 crc kubenswrapper[4981]: I0128 15:26:01.704655 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j8b8\" (UniqueName: \"kubernetes.io/projected/83b8af8d-76d2-4c5d-bb94-f9a4490796ca-kube-api-access-7j8b8\") pod \"redhat-operators-8mdsh\" (UID: \"83b8af8d-76d2-4c5d-bb94-f9a4490796ca\") " pod="openshift-marketplace/redhat-operators-8mdsh" Jan 28 15:26:01 crc kubenswrapper[4981]: I0128 15:26:01.704854 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b8af8d-76d2-4c5d-bb94-f9a4490796ca-utilities\") pod \"redhat-operators-8mdsh\" (UID: 
\"83b8af8d-76d2-4c5d-bb94-f9a4490796ca\") " pod="openshift-marketplace/redhat-operators-8mdsh" Jan 28 15:26:01 crc kubenswrapper[4981]: I0128 15:26:01.705084 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b8af8d-76d2-4c5d-bb94-f9a4490796ca-catalog-content\") pod \"redhat-operators-8mdsh\" (UID: \"83b8af8d-76d2-4c5d-bb94-f9a4490796ca\") " pod="openshift-marketplace/redhat-operators-8mdsh" Jan 28 15:26:01 crc kubenswrapper[4981]: I0128 15:26:01.726829 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j8b8\" (UniqueName: \"kubernetes.io/projected/83b8af8d-76d2-4c5d-bb94-f9a4490796ca-kube-api-access-7j8b8\") pod \"redhat-operators-8mdsh\" (UID: \"83b8af8d-76d2-4c5d-bb94-f9a4490796ca\") " pod="openshift-marketplace/redhat-operators-8mdsh" Jan 28 15:26:01 crc kubenswrapper[4981]: I0128 15:26:01.868657 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8mdsh" Jan 28 15:26:02 crc kubenswrapper[4981]: I0128 15:26:02.364648 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8mdsh"] Jan 28 15:26:02 crc kubenswrapper[4981]: I0128 15:26:02.476917 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8mdsh" event={"ID":"83b8af8d-76d2-4c5d-bb94-f9a4490796ca","Type":"ContainerStarted","Data":"9f5ed8b40160034c911c4bc3392bd2204b7067f384b98ab9aa8a48e78e0f223e"} Jan 28 15:26:03 crc kubenswrapper[4981]: I0128 15:26:03.485637 4981 generic.go:334] "Generic (PLEG): container finished" podID="83b8af8d-76d2-4c5d-bb94-f9a4490796ca" containerID="5eb7cc1fe16b8306e55c429735b3408f68a805a353081d2958b7954b89ce7562" exitCode=0 Jan 28 15:26:03 crc kubenswrapper[4981]: I0128 15:26:03.485710 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8mdsh" event={"ID":"83b8af8d-76d2-4c5d-bb94-f9a4490796ca","Type":"ContainerDied","Data":"5eb7cc1fe16b8306e55c429735b3408f68a805a353081d2958b7954b89ce7562"} Jan 28 15:26:05 crc kubenswrapper[4981]: I0128 15:26:05.507091 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8mdsh" event={"ID":"83b8af8d-76d2-4c5d-bb94-f9a4490796ca","Type":"ContainerStarted","Data":"00f303440b1fda4de593586cd217c0e9d8fc1fcd069897df9ee066d45783a3eb"} Jan 28 15:26:07 crc kubenswrapper[4981]: I0128 15:26:07.572740 4981 generic.go:334] "Generic (PLEG): container finished" podID="83b8af8d-76d2-4c5d-bb94-f9a4490796ca" containerID="00f303440b1fda4de593586cd217c0e9d8fc1fcd069897df9ee066d45783a3eb" exitCode=0 Jan 28 15:26:07 crc kubenswrapper[4981]: I0128 15:26:07.572823 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8mdsh" event={"ID":"83b8af8d-76d2-4c5d-bb94-f9a4490796ca","Type":"ContainerDied","Data":"00f303440b1fda4de593586cd217c0e9d8fc1fcd069897df9ee066d45783a3eb"} Jan 28 15:26:08 crc kubenswrapper[4981]: I0128 15:26:08.590907 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8mdsh" event={"ID":"83b8af8d-76d2-4c5d-bb94-f9a4490796ca","Type":"ContainerStarted","Data":"5a19e8eca87c3b3a1590ff4d3b698253da73b9dbb503db3c26ebaff23c40ebd5"} Jan 28 15:26:08 crc kubenswrapper[4981]: I0128 15:26:08.622228 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8mdsh" 
podStartSLOduration=3.133426006 podStartE2EDuration="7.622172998s" podCreationTimestamp="2026-01-28 15:26:01 +0000 UTC" firstStartedPulling="2026-01-28 15:26:03.487625873 +0000 UTC m=+1374.939784114" lastFinishedPulling="2026-01-28 15:26:07.976372845 +0000 UTC m=+1379.428531106" observedRunningTime="2026-01-28 15:26:08.618954811 +0000 UTC m=+1380.071113062" watchObservedRunningTime="2026-01-28 15:26:08.622172998 +0000 UTC m=+1380.074331249" Jan 28 15:26:09 crc kubenswrapper[4981]: I0128 15:26:09.056204 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 15:26:10 crc kubenswrapper[4981]: I0128 15:26:10.728745 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 15:26:11 crc kubenswrapper[4981]: I0128 15:26:11.869071 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8mdsh" Jan 28 15:26:11 crc kubenswrapper[4981]: I0128 15:26:11.869131 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8mdsh" Jan 28 15:26:12 crc kubenswrapper[4981]: I0128 15:26:12.932484 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8mdsh" podUID="83b8af8d-76d2-4c5d-bb94-f9a4490796ca" containerName="registry-server" probeResult="failure" output=< Jan 28 15:26:12 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s Jan 28 15:26:12 crc kubenswrapper[4981]: > Jan 28 15:26:13 crc kubenswrapper[4981]: I0128 15:26:13.975019 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="6456c27c-6d70-453b-a759-b6411aa67f51" containerName="rabbitmq" containerID="cri-o://af06c90b2e043d7627bd881a6b21cfdb96d65ceaa451a398a6eea4739e5ba22a" gracePeriod=604796 Jan 28 15:26:14 crc kubenswrapper[4981]: I0128 15:26:14.821481 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="5cccad1c-80c8-4806-a093-ecb1ad203f3c" containerName="rabbitmq" containerID="cri-o://6bf5c589eda06e1fde576e47b8606fa08955ff587665638e00849bbbfc2e3b6b" gracePeriod=604796 Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.407893 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-4wpk8"] Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.409991 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.412450 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.424389 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-4wpk8"] Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.465060 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.465130 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.465341 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.465402 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rfrk\" (UniqueName: \"kubernetes.io/projected/20edd6cf-e425-4544-9d68-0523586dd434-kube-api-access-5rfrk\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.465503 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-dns-svc\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.465714 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-config\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.465795 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.567463 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-dns-svc\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: 
\"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.567718 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-config\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.567774 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.568017 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.568097 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.568272 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.568290 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-dns-svc\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.568344 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rfrk\" (UniqueName: \"kubernetes.io/projected/20edd6cf-e425-4544-9d68-0523586dd434-kube-api-access-5rfrk\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.568762 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-config\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.568844 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-ovsdbserver-sb\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" 
Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.569033 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-openstack-edpm-ipam\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.570007 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-ovsdbserver-nb\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.570106 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-dns-swift-storage-0\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.587786 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rfrk\" (UniqueName: \"kubernetes.io/projected/20edd6cf-e425-4544-9d68-0523586dd434-kube-api-access-5rfrk\") pod \"dnsmasq-dns-67b789f86c-4wpk8\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") " pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:19 crc kubenswrapper[4981]: I0128 15:26:19.730061 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.179697 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-4wpk8"] Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.690216 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.701725 4981 generic.go:334] "Generic (PLEG): container finished" podID="20edd6cf-e425-4544-9d68-0523586dd434" containerID="c94d15df5d98036b72e2667ebc5c90013be36926cda74791f7448fc7002087c6" exitCode=0 Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.701811 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" event={"ID":"20edd6cf-e425-4544-9d68-0523586dd434","Type":"ContainerDied","Data":"c94d15df5d98036b72e2667ebc5c90013be36926cda74791f7448fc7002087c6"} Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.701843 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" event={"ID":"20edd6cf-e425-4544-9d68-0523586dd434","Type":"ContainerStarted","Data":"ff60f75fc82290ec3d1ee7c57378bf66e9ac9172f1e7033c99e3a6eed9f3ba26"} Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.704505 4981 generic.go:334] "Generic (PLEG): container finished" podID="6456c27c-6d70-453b-a759-b6411aa67f51" containerID="af06c90b2e043d7627bd881a6b21cfdb96d65ceaa451a398a6eea4739e5ba22a" exitCode=0 Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.704558 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6456c27c-6d70-453b-a759-b6411aa67f51","Type":"ContainerDied","Data":"af06c90b2e043d7627bd881a6b21cfdb96d65ceaa451a398a6eea4739e5ba22a"} Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.704624 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6456c27c-6d70-453b-a759-b6411aa67f51","Type":"ContainerDied","Data":"becab4a7d49c9d2b1f2d34e2e1190508103e2d5d863ea9b7f4569176e7700f4d"} Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.704649 4981 scope.go:117] "RemoveContainer" containerID="af06c90b2e043d7627bd881a6b21cfdb96d65ceaa451a398a6eea4739e5ba22a" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.704829 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.828826 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6456c27c-6d70-453b-a759-b6411aa67f51-erlang-cookie-secret\") pod \"6456c27c-6d70-453b-a759-b6411aa67f51\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.828880 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-plugins\") pod \"6456c27c-6d70-453b-a759-b6411aa67f51\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.828920 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9n7r\" (UniqueName: \"kubernetes.io/projected/6456c27c-6d70-453b-a759-b6411aa67f51-kube-api-access-k9n7r\") pod \"6456c27c-6d70-453b-a759-b6411aa67f51\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.828950 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-tls\") pod \"6456c27c-6d70-453b-a759-b6411aa67f51\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.828983 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6456c27c-6d70-453b-a759-b6411aa67f51-plugins-conf\") pod \"6456c27c-6d70-453b-a759-b6411aa67f51\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.829112 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-erlang-cookie\") pod \"6456c27c-6d70-453b-a759-b6411aa67f51\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.829130 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-confd\") pod \"6456c27c-6d70-453b-a759-b6411aa67f51\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.829179 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6456c27c-6d70-453b-a759-b6411aa67f51-pod-info\") pod \"6456c27c-6d70-453b-a759-b6411aa67f51\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.829223 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6456c27c-6d70-453b-a759-b6411aa67f51-config-data\") pod \"6456c27c-6d70-453b-a759-b6411aa67f51\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.829243 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6456c27c-6d70-453b-a759-b6411aa67f51-server-conf\") pod 
\"6456c27c-6d70-453b-a759-b6411aa67f51\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.829284 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"6456c27c-6d70-453b-a759-b6411aa67f51\" (UID: \"6456c27c-6d70-453b-a759-b6411aa67f51\") " Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.829516 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6456c27c-6d70-453b-a759-b6411aa67f51" (UID: "6456c27c-6d70-453b-a759-b6411aa67f51"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.830255 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6456c27c-6d70-453b-a759-b6411aa67f51-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6456c27c-6d70-453b-a759-b6411aa67f51" (UID: "6456c27c-6d70-453b-a759-b6411aa67f51"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.830406 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6456c27c-6d70-453b-a759-b6411aa67f51" (UID: "6456c27c-6d70-453b-a759-b6411aa67f51"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.831879 4981 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.831906 4981 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6456c27c-6d70-453b-a759-b6411aa67f51-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.831919 4981 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.839959 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6456c27c-6d70-453b-a759-b6411aa67f51" (UID: "6456c27c-6d70-453b-a759-b6411aa67f51"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.842877 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6456c27c-6d70-453b-a759-b6411aa67f51-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6456c27c-6d70-453b-a759-b6411aa67f51" (UID: "6456c27c-6d70-453b-a759-b6411aa67f51"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.845331 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6456c27c-6d70-453b-a759-b6411aa67f51-kube-api-access-k9n7r" (OuterVolumeSpecName: "kube-api-access-k9n7r") pod "6456c27c-6d70-453b-a759-b6411aa67f51" (UID: "6456c27c-6d70-453b-a759-b6411aa67f51"). InnerVolumeSpecName "kube-api-access-k9n7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.852370 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6456c27c-6d70-453b-a759-b6411aa67f51-pod-info" (OuterVolumeSpecName: "pod-info") pod "6456c27c-6d70-453b-a759-b6411aa67f51" (UID: "6456c27c-6d70-453b-a759-b6411aa67f51"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.857521 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "6456c27c-6d70-453b-a759-b6411aa67f51" (UID: "6456c27c-6d70-453b-a759-b6411aa67f51"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.870602 4981 scope.go:117] "RemoveContainer" containerID="ae54a8260c30b63b6c7115a3e7a119595f296196630adf0f1e2c402962c61321" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.889949 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6456c27c-6d70-453b-a759-b6411aa67f51-config-data" (OuterVolumeSpecName: "config-data") pod "6456c27c-6d70-453b-a759-b6411aa67f51" (UID: "6456c27c-6d70-453b-a759-b6411aa67f51"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.908549 4981 scope.go:117] "RemoveContainer" containerID="af06c90b2e043d7627bd881a6b21cfdb96d65ceaa451a398a6eea4739e5ba22a" Jan 28 15:26:20 crc kubenswrapper[4981]: E0128 15:26:20.911562 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af06c90b2e043d7627bd881a6b21cfdb96d65ceaa451a398a6eea4739e5ba22a\": container with ID starting with af06c90b2e043d7627bd881a6b21cfdb96d65ceaa451a398a6eea4739e5ba22a not found: ID does not exist" containerID="af06c90b2e043d7627bd881a6b21cfdb96d65ceaa451a398a6eea4739e5ba22a" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.911603 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af06c90b2e043d7627bd881a6b21cfdb96d65ceaa451a398a6eea4739e5ba22a"} err="failed to get container status \"af06c90b2e043d7627bd881a6b21cfdb96d65ceaa451a398a6eea4739e5ba22a\": rpc error: code = NotFound desc = could not find container \"af06c90b2e043d7627bd881a6b21cfdb96d65ceaa451a398a6eea4739e5ba22a\": container with ID starting with af06c90b2e043d7627bd881a6b21cfdb96d65ceaa451a398a6eea4739e5ba22a not found: ID does not exist" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.911625 4981 scope.go:117] "RemoveContainer" containerID="ae54a8260c30b63b6c7115a3e7a119595f296196630adf0f1e2c402962c61321" Jan 28 15:26:20 crc kubenswrapper[4981]: E0128 15:26:20.914812 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae54a8260c30b63b6c7115a3e7a119595f296196630adf0f1e2c402962c61321\": container with ID starting with ae54a8260c30b63b6c7115a3e7a119595f296196630adf0f1e2c402962c61321 not found: ID does not exist" containerID="ae54a8260c30b63b6c7115a3e7a119595f296196630adf0f1e2c402962c61321" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.914840 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae54a8260c30b63b6c7115a3e7a119595f296196630adf0f1e2c402962c61321"} err="failed to get container status \"ae54a8260c30b63b6c7115a3e7a119595f296196630adf0f1e2c402962c61321\": rpc error: code = NotFound desc = could not find container \"ae54a8260c30b63b6c7115a3e7a119595f296196630adf0f1e2c402962c61321\": container with ID starting with ae54a8260c30b63b6c7115a3e7a119595f296196630adf0f1e2c402962c61321 not found: ID does not exist" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.915348 4981 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="5cccad1c-80c8-4806-a093-ecb1ad203f3c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.933411 4981 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.933452 4981 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6456c27c-6d70-453b-a759-b6411aa67f51-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.933467 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9n7r\" (UniqueName: 
\"kubernetes.io/projected/6456c27c-6d70-453b-a759-b6411aa67f51-kube-api-access-k9n7r\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.933478 4981 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.933489 4981 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6456c27c-6d70-453b-a759-b6411aa67f51-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.933500 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6456c27c-6d70-453b-a759-b6411aa67f51-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.956968 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6456c27c-6d70-453b-a759-b6411aa67f51-server-conf" (OuterVolumeSpecName: "server-conf") pod "6456c27c-6d70-453b-a759-b6411aa67f51" (UID: "6456c27c-6d70-453b-a759-b6411aa67f51"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:26:20 crc kubenswrapper[4981]: I0128 15:26:20.972319 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.016591 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6456c27c-6d70-453b-a759-b6411aa67f51" (UID: "6456c27c-6d70-453b-a759-b6411aa67f51"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.035352 4981 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6456c27c-6d70-453b-a759-b6411aa67f51-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.035386 4981 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6456c27c-6d70-453b-a759-b6411aa67f51-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.035394 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.339785 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.350149 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.372656 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 15:26:21 crc kubenswrapper[4981]: E0128 15:26:21.373101 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6456c27c-6d70-453b-a759-b6411aa67f51" containerName="rabbitmq" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.373120 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="6456c27c-6d70-453b-a759-b6411aa67f51" containerName="rabbitmq" Jan 28 15:26:21 crc kubenswrapper[4981]: E0128 15:26:21.373164 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6456c27c-6d70-453b-a759-b6411aa67f51" containerName="setup-container" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.373174 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="6456c27c-6d70-453b-a759-b6411aa67f51" containerName="setup-container" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.373409 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="6456c27c-6d70-453b-a759-b6411aa67f51" containerName="rabbitmq" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.374702 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.378926 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.379229 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.379388 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.380067 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mlcst" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.380924 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.382070 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.383296 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.388670 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.544389 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ead14a8f-5759-4a7c-b8a4-6560131c28d1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.544423 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ead14a8f-5759-4a7c-b8a4-6560131c28d1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.544794 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ead14a8f-5759-4a7c-b8a4-6560131c28d1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.544915 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ead14a8f-5759-4a7c-b8a4-6560131c28d1-config-data\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.544997 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ead14a8f-5759-4a7c-b8a4-6560131c28d1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.545090 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/ead14a8f-5759-4a7c-b8a4-6560131c28d1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.545275 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ead14a8f-5759-4a7c-b8a4-6560131c28d1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.545773 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpxbv\" (UniqueName: \"kubernetes.io/projected/ead14a8f-5759-4a7c-b8a4-6560131c28d1-kube-api-access-mpxbv\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.546094 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.546176 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ead14a8f-5759-4a7c-b8a4-6560131c28d1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.546325 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ead14a8f-5759-4a7c-b8a4-6560131c28d1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.648750 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ead14a8f-5759-4a7c-b8a4-6560131c28d1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.648806 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ead14a8f-5759-4a7c-b8a4-6560131c28d1-config-data\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.648847 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ead14a8f-5759-4a7c-b8a4-6560131c28d1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.648932 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ead14a8f-5759-4a7c-b8a4-6560131c28d1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") 
" pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.649044 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ead14a8f-5759-4a7c-b8a4-6560131c28d1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.649078 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpxbv\" (UniqueName: \"kubernetes.io/projected/ead14a8f-5759-4a7c-b8a4-6560131c28d1-kube-api-access-mpxbv\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.649224 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.649256 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ead14a8f-5759-4a7c-b8a4-6560131c28d1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.649613 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.649935 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ead14a8f-5759-4a7c-b8a4-6560131c28d1-config-data\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.650068 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ead14a8f-5759-4a7c-b8a4-6560131c28d1-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.650295 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ead14a8f-5759-4a7c-b8a4-6560131c28d1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.650403 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ead14a8f-5759-4a7c-b8a4-6560131c28d1-server-conf\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.650423 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/ead14a8f-5759-4a7c-b8a4-6560131c28d1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.650477 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ead14a8f-5759-4a7c-b8a4-6560131c28d1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.651102 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ead14a8f-5759-4a7c-b8a4-6560131c28d1-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.651253 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ead14a8f-5759-4a7c-b8a4-6560131c28d1-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.655713 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ead14a8f-5759-4a7c-b8a4-6560131c28d1-pod-info\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.656407 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ead14a8f-5759-4a7c-b8a4-6560131c28d1-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.661824 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ead14a8f-5759-4a7c-b8a4-6560131c28d1-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.668216 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpxbv\" (UniqueName: \"kubernetes.io/projected/ead14a8f-5759-4a7c-b8a4-6560131c28d1-kube-api-access-mpxbv\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.671710 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ead14a8f-5759-4a7c-b8a4-6560131c28d1-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.727520 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" event={"ID":"20edd6cf-e425-4544-9d68-0523586dd434","Type":"ContainerStarted","Data":"2e0c96ce68b37ec50280d97bcff9945573b4995524c56e25550ce637c9b79985"} Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.731642 4981 generic.go:334] "Generic (PLEG): container finished" 
podID="5cccad1c-80c8-4806-a093-ecb1ad203f3c" containerID="6bf5c589eda06e1fde576e47b8606fa08955ff587665638e00849bbbfc2e3b6b" exitCode=0 Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.731686 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5cccad1c-80c8-4806-a093-ecb1ad203f3c","Type":"ContainerDied","Data":"6bf5c589eda06e1fde576e47b8606fa08955ff587665638e00849bbbfc2e3b6b"} Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.752467 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" podStartSLOduration=2.75244905 podStartE2EDuration="2.75244905s" podCreationTimestamp="2026-01-28 15:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:26:21.747872937 +0000 UTC m=+1393.200031178" watchObservedRunningTime="2026-01-28 15:26:21.75244905 +0000 UTC m=+1393.204607281" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.758325 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"ead14a8f-5759-4a7c-b8a4-6560131c28d1\") " pod="openstack/rabbitmq-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.847684 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.949462 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8mdsh" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.955787 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-plugins\") pod \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.955833 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frkdc\" (UniqueName: \"kubernetes.io/projected/5cccad1c-80c8-4806-a093-ecb1ad203f3c-kube-api-access-frkdc\") pod \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.955917 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.955979 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-confd\") pod \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.955996 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5cccad1c-80c8-4806-a093-ecb1ad203f3c-pod-info\") pod \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.956052 4981 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5cccad1c-80c8-4806-a093-ecb1ad203f3c-erlang-cookie-secret\") pod \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.956075 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-tls\") pod \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.956095 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5cccad1c-80c8-4806-a093-ecb1ad203f3c-server-conf\") pod \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.956143 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cccad1c-80c8-4806-a093-ecb1ad203f3c-config-data\") pod \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.956162 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-erlang-cookie\") pod \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.956236 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5cccad1c-80c8-4806-a093-ecb1ad203f3c-plugins-conf\") pod \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\" (UID: \"5cccad1c-80c8-4806-a093-ecb1ad203f3c\") " Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.956267 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "5cccad1c-80c8-4806-a093-ecb1ad203f3c" (UID: "5cccad1c-80c8-4806-a093-ecb1ad203f3c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.957208 4981 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.957635 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cccad1c-80c8-4806-a093-ecb1ad203f3c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "5cccad1c-80c8-4806-a093-ecb1ad203f3c" (UID: "5cccad1c-80c8-4806-a093-ecb1ad203f3c"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.958731 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "5cccad1c-80c8-4806-a093-ecb1ad203f3c" (UID: "5cccad1c-80c8-4806-a093-ecb1ad203f3c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.962790 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cccad1c-80c8-4806-a093-ecb1ad203f3c-kube-api-access-frkdc" (OuterVolumeSpecName: "kube-api-access-frkdc") pod "5cccad1c-80c8-4806-a093-ecb1ad203f3c" (UID: "5cccad1c-80c8-4806-a093-ecb1ad203f3c"). InnerVolumeSpecName "kube-api-access-frkdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.966941 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "5cccad1c-80c8-4806-a093-ecb1ad203f3c" (UID: "5cccad1c-80c8-4806-a093-ecb1ad203f3c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.968936 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "5cccad1c-80c8-4806-a093-ecb1ad203f3c" (UID: "5cccad1c-80c8-4806-a093-ecb1ad203f3c"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.976569 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5cccad1c-80c8-4806-a093-ecb1ad203f3c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "5cccad1c-80c8-4806-a093-ecb1ad203f3c" (UID: "5cccad1c-80c8-4806-a093-ecb1ad203f3c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.981061 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/5cccad1c-80c8-4806-a093-ecb1ad203f3c-pod-info" (OuterVolumeSpecName: "pod-info") pod "5cccad1c-80c8-4806-a093-ecb1ad203f3c" (UID: "5cccad1c-80c8-4806-a093-ecb1ad203f3c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.988078 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cccad1c-80c8-4806-a093-ecb1ad203f3c-config-data" (OuterVolumeSpecName: "config-data") pod "5cccad1c-80c8-4806-a093-ecb1ad203f3c" (UID: "5cccad1c-80c8-4806-a093-ecb1ad203f3c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:26:21 crc kubenswrapper[4981]: I0128 15:26:21.994957 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.021520 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8mdsh" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.057264 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5cccad1c-80c8-4806-a093-ecb1ad203f3c-server-conf" (OuterVolumeSpecName: "server-conf") pod "5cccad1c-80c8-4806-a093-ecb1ad203f3c" (UID: "5cccad1c-80c8-4806-a093-ecb1ad203f3c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.058636 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5cccad1c-80c8-4806-a093-ecb1ad203f3c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.058671 4981 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.058686 4981 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/5cccad1c-80c8-4806-a093-ecb1ad203f3c-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.058698 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frkdc\" (UniqueName: \"kubernetes.io/projected/5cccad1c-80c8-4806-a093-ecb1ad203f3c-kube-api-access-frkdc\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.058721 4981 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.058732 4981 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/5cccad1c-80c8-4806-a093-ecb1ad203f3c-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.058742 4981 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/5cccad1c-80c8-4806-a093-ecb1ad203f3c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.058752 4981 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.058763 4981 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/5cccad1c-80c8-4806-a093-ecb1ad203f3c-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.085470 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.098924 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-confd" 
(OuterVolumeSpecName: "rabbitmq-confd") pod "5cccad1c-80c8-4806-a093-ecb1ad203f3c" (UID: "5cccad1c-80c8-4806-a093-ecb1ad203f3c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.160761 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.160784 4981 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/5cccad1c-80c8-4806-a093-ecb1ad203f3c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.210493 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8mdsh"] Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.458772 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 15:26:22 crc kubenswrapper[4981]: W0128 15:26:22.470400 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podead14a8f_5759_4a7c_b8a4_6560131c28d1.slice/crio-1ace5a62c010e360fc3ee845e1746ebd2a53b47899b1ac7208914518f0575cc7 WatchSource:0}: Error finding container 1ace5a62c010e360fc3ee845e1746ebd2a53b47899b1ac7208914518f0575cc7: Status 404 returned error can't find the container with id 1ace5a62c010e360fc3ee845e1746ebd2a53b47899b1ac7208914518f0575cc7 Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.752384 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"5cccad1c-80c8-4806-a093-ecb1ad203f3c","Type":"ContainerDied","Data":"17af73a516e39e9c7e3d5b8699165ab84c8251b422eca70c14f174d597350be8"} Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.752954 4981 scope.go:117] "RemoveContainer" containerID="6bf5c589eda06e1fde576e47b8606fa08955ff587665638e00849bbbfc2e3b6b" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.753458 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.758883 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ead14a8f-5759-4a7c-b8a4-6560131c28d1","Type":"ContainerStarted","Data":"1ace5a62c010e360fc3ee845e1746ebd2a53b47899b1ac7208914518f0575cc7"} Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.758911 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.778302 4981 scope.go:117] "RemoveContainer" containerID="ed3fa028e256ef52d67123bf375679a669443697914c1d8322591cd65286f694" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.798162 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.811987 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.840218 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 15:26:22 crc kubenswrapper[4981]: E0128 15:26:22.840665 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cccad1c-80c8-4806-a093-ecb1ad203f3c" containerName="setup-container" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.840683 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cccad1c-80c8-4806-a093-ecb1ad203f3c" containerName="setup-container" Jan 28 15:26:22 crc kubenswrapper[4981]: E0128 15:26:22.840709 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cccad1c-80c8-4806-a093-ecb1ad203f3c" containerName="rabbitmq" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.840716 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cccad1c-80c8-4806-a093-ecb1ad203f3c" containerName="rabbitmq" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.840993 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cccad1c-80c8-4806-a093-ecb1ad203f3c" containerName="rabbitmq" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.843096 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.850868 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.851003 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.851091 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.851220 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-w9wlh" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.851341 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.851391 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.856466 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.872796 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 15:26:22 crc kubenswrapper[4981]: E0128 15:26:22.913705 4981 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cccad1c_80c8_4806_a093_ecb1ad203f3c.slice\": RecentStats: unable to find data in memory cache]" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.973635 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.974006 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.974041 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.974080 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.974116 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.974280 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.974353 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64vx2\" (UniqueName: \"kubernetes.io/projected/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-kube-api-access-64vx2\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.974426 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.974525 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.974541 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:22 crc kubenswrapper[4981]: I0128 15:26:22.974786 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.076779 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.076837 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.076959 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.077025 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.077061 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.077101 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.077153 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.077232 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.077326 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.077369 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64vx2\" (UniqueName: \"kubernetes.io/projected/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-kube-api-access-64vx2\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.077424 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.077673 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") device 
mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.078086 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.079107 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.079130 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.079902 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.080140 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.082165 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.083977 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.090490 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.095436 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.099154 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-64vx2\" (UniqueName: \"kubernetes.io/projected/8c327a08-ce8c-42f7-b305-cfc8b7f2d644-kube-api-access-64vx2\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.120563 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"8c327a08-ce8c-42f7-b305-cfc8b7f2d644\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.166870 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.327947 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5cccad1c-80c8-4806-a093-ecb1ad203f3c" path="/var/lib/kubelet/pods/5cccad1c-80c8-4806-a093-ecb1ad203f3c/volumes" Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.328700 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6456c27c-6d70-453b-a759-b6411aa67f51" path="/var/lib/kubelet/pods/6456c27c-6d70-453b-a759-b6411aa67f51/volumes" Jan 28 15:26:23 crc kubenswrapper[4981]: W0128 15:26:23.655500 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c327a08_ce8c_42f7_b305_cfc8b7f2d644.slice/crio-3c70f21ddacdd7c083224f84570b8832e3a4922ab08bab0ffdda07d9d745f87a WatchSource:0}: Error finding container 3c70f21ddacdd7c083224f84570b8832e3a4922ab08bab0ffdda07d9d745f87a: Status 404 returned error can't find the container with id 3c70f21ddacdd7c083224f84570b8832e3a4922ab08bab0ffdda07d9d745f87a Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.659591 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.774556 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8c327a08-ce8c-42f7-b305-cfc8b7f2d644","Type":"ContainerStarted","Data":"3c70f21ddacdd7c083224f84570b8832e3a4922ab08bab0ffdda07d9d745f87a"} Jan 28 15:26:23 crc kubenswrapper[4981]: I0128 15:26:23.776291 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8mdsh" podUID="83b8af8d-76d2-4c5d-bb94-f9a4490796ca" containerName="registry-server" containerID="cri-o://5a19e8eca87c3b3a1590ff4d3b698253da73b9dbb503db3c26ebaff23c40ebd5" gracePeriod=2 Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.154871 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8mdsh" Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.316254 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b8af8d-76d2-4c5d-bb94-f9a4490796ca-utilities\") pod \"83b8af8d-76d2-4c5d-bb94-f9a4490796ca\" (UID: \"83b8af8d-76d2-4c5d-bb94-f9a4490796ca\") " Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.316313 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b8af8d-76d2-4c5d-bb94-f9a4490796ca-catalog-content\") pod \"83b8af8d-76d2-4c5d-bb94-f9a4490796ca\" (UID: \"83b8af8d-76d2-4c5d-bb94-f9a4490796ca\") " Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.316450 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j8b8\" (UniqueName: \"kubernetes.io/projected/83b8af8d-76d2-4c5d-bb94-f9a4490796ca-kube-api-access-7j8b8\") pod \"83b8af8d-76d2-4c5d-bb94-f9a4490796ca\" (UID: \"83b8af8d-76d2-4c5d-bb94-f9a4490796ca\") " Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.318364 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83b8af8d-76d2-4c5d-bb94-f9a4490796ca-utilities" (OuterVolumeSpecName: "utilities") pod "83b8af8d-76d2-4c5d-bb94-f9a4490796ca" (UID: "83b8af8d-76d2-4c5d-bb94-f9a4490796ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.324775 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83b8af8d-76d2-4c5d-bb94-f9a4490796ca-kube-api-access-7j8b8" (OuterVolumeSpecName: "kube-api-access-7j8b8") pod "83b8af8d-76d2-4c5d-bb94-f9a4490796ca" (UID: "83b8af8d-76d2-4c5d-bb94-f9a4490796ca"). InnerVolumeSpecName "kube-api-access-7j8b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.419629 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j8b8\" (UniqueName: \"kubernetes.io/projected/83b8af8d-76d2-4c5d-bb94-f9a4490796ca-kube-api-access-7j8b8\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.419682 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83b8af8d-76d2-4c5d-bb94-f9a4490796ca-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.502209 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83b8af8d-76d2-4c5d-bb94-f9a4490796ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83b8af8d-76d2-4c5d-bb94-f9a4490796ca" (UID: "83b8af8d-76d2-4c5d-bb94-f9a4490796ca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.521924 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83b8af8d-76d2-4c5d-bb94-f9a4490796ca-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.787059 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ead14a8f-5759-4a7c-b8a4-6560131c28d1","Type":"ContainerStarted","Data":"afd51ee20ac9edd19f29689c40ff8fd3e7e0696a7cb233beb9ca374843bb5029"} Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.794423 4981 generic.go:334] "Generic (PLEG): container finished" podID="83b8af8d-76d2-4c5d-bb94-f9a4490796ca" containerID="5a19e8eca87c3b3a1590ff4d3b698253da73b9dbb503db3c26ebaff23c40ebd5" exitCode=0 Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.794861 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8mdsh" Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.794902 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8mdsh" event={"ID":"83b8af8d-76d2-4c5d-bb94-f9a4490796ca","Type":"ContainerDied","Data":"5a19e8eca87c3b3a1590ff4d3b698253da73b9dbb503db3c26ebaff23c40ebd5"} Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.795085 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8mdsh" event={"ID":"83b8af8d-76d2-4c5d-bb94-f9a4490796ca","Type":"ContainerDied","Data":"9f5ed8b40160034c911c4bc3392bd2204b7067f384b98ab9aa8a48e78e0f223e"} Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.795265 4981 scope.go:117] "RemoveContainer" containerID="5a19e8eca87c3b3a1590ff4d3b698253da73b9dbb503db3c26ebaff23c40ebd5" Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.839771 4981 scope.go:117] "RemoveContainer" containerID="00f303440b1fda4de593586cd217c0e9d8fc1fcd069897df9ee066d45783a3eb" Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.859028 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8mdsh"] Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.867861 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8mdsh"] Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.869786 4981 scope.go:117] "RemoveContainer" containerID="5eb7cc1fe16b8306e55c429735b3408f68a805a353081d2958b7954b89ce7562" Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.901832 4981 scope.go:117] "RemoveContainer" containerID="5a19e8eca87c3b3a1590ff4d3b698253da73b9dbb503db3c26ebaff23c40ebd5" Jan 28 15:26:24 crc kubenswrapper[4981]: E0128 15:26:24.902338 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a19e8eca87c3b3a1590ff4d3b698253da73b9dbb503db3c26ebaff23c40ebd5\": container with ID starting with 5a19e8eca87c3b3a1590ff4d3b698253da73b9dbb503db3c26ebaff23c40ebd5 not found: ID does not exist" containerID="5a19e8eca87c3b3a1590ff4d3b698253da73b9dbb503db3c26ebaff23c40ebd5" Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.902386 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a19e8eca87c3b3a1590ff4d3b698253da73b9dbb503db3c26ebaff23c40ebd5"} err="failed to get container status 
\"5a19e8eca87c3b3a1590ff4d3b698253da73b9dbb503db3c26ebaff23c40ebd5\": rpc error: code = NotFound desc = could not find container \"5a19e8eca87c3b3a1590ff4d3b698253da73b9dbb503db3c26ebaff23c40ebd5\": container with ID starting with 5a19e8eca87c3b3a1590ff4d3b698253da73b9dbb503db3c26ebaff23c40ebd5 not found: ID does not exist" Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.902415 4981 scope.go:117] "RemoveContainer" containerID="00f303440b1fda4de593586cd217c0e9d8fc1fcd069897df9ee066d45783a3eb" Jan 28 15:26:24 crc kubenswrapper[4981]: E0128 15:26:24.902892 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00f303440b1fda4de593586cd217c0e9d8fc1fcd069897df9ee066d45783a3eb\": container with ID starting with 00f303440b1fda4de593586cd217c0e9d8fc1fcd069897df9ee066d45783a3eb not found: ID does not exist" containerID="00f303440b1fda4de593586cd217c0e9d8fc1fcd069897df9ee066d45783a3eb" Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.902943 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00f303440b1fda4de593586cd217c0e9d8fc1fcd069897df9ee066d45783a3eb"} err="failed to get container status \"00f303440b1fda4de593586cd217c0e9d8fc1fcd069897df9ee066d45783a3eb\": rpc error: code = NotFound desc = could not find container \"00f303440b1fda4de593586cd217c0e9d8fc1fcd069897df9ee066d45783a3eb\": container with ID starting with 00f303440b1fda4de593586cd217c0e9d8fc1fcd069897df9ee066d45783a3eb not found: ID does not exist" Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.902969 4981 scope.go:117] "RemoveContainer" containerID="5eb7cc1fe16b8306e55c429735b3408f68a805a353081d2958b7954b89ce7562" Jan 28 15:26:24 crc kubenswrapper[4981]: E0128 15:26:24.903265 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eb7cc1fe16b8306e55c429735b3408f68a805a353081d2958b7954b89ce7562\": container with ID starting with 5eb7cc1fe16b8306e55c429735b3408f68a805a353081d2958b7954b89ce7562 not found: ID does not exist" containerID="5eb7cc1fe16b8306e55c429735b3408f68a805a353081d2958b7954b89ce7562" Jan 28 15:26:24 crc kubenswrapper[4981]: I0128 15:26:24.903293 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eb7cc1fe16b8306e55c429735b3408f68a805a353081d2958b7954b89ce7562"} err="failed to get container status \"5eb7cc1fe16b8306e55c429735b3408f68a805a353081d2958b7954b89ce7562\": rpc error: code = NotFound desc = could not find container \"5eb7cc1fe16b8306e55c429735b3408f68a805a353081d2958b7954b89ce7562\": container with ID starting with 5eb7cc1fe16b8306e55c429735b3408f68a805a353081d2958b7954b89ce7562 not found: ID does not exist" Jan 28 15:26:25 crc kubenswrapper[4981]: I0128 15:26:25.337144 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83b8af8d-76d2-4c5d-bb94-f9a4490796ca" path="/var/lib/kubelet/pods/83b8af8d-76d2-4c5d-bb94-f9a4490796ca/volumes" Jan 28 15:26:27 crc kubenswrapper[4981]: I0128 15:26:27.839348 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8c327a08-ce8c-42f7-b305-cfc8b7f2d644","Type":"ContainerStarted","Data":"c4006fc0e5bbb20178f04e055b9e90a949f0b6d1b60add104f68df6946608ebd"} Jan 28 15:26:29 crc kubenswrapper[4981]: I0128 15:26:29.732356 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 
15:26:29 crc kubenswrapper[4981]: I0128 15:26:29.806139 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-b4nl5"] Jan 28 15:26:29 crc kubenswrapper[4981]: I0128 15:26:29.806462 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5" podUID="46b93ba2-9bbb-4754-8aee-7c588bc645de" containerName="dnsmasq-dns" containerID="cri-o://11f0ace9333df29fd82dfbe7a5b5d48e8b15826e3a2403f07bcdfb865d5bc738" gracePeriod=10 Jan 28 15:26:29 crc kubenswrapper[4981]: I0128 15:26:29.949027 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-t7szx"] Jan 28 15:26:29 crc kubenswrapper[4981]: E0128 15:26:29.949545 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b8af8d-76d2-4c5d-bb94-f9a4490796ca" containerName="registry-server" Jan 28 15:26:29 crc kubenswrapper[4981]: I0128 15:26:29.949569 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b8af8d-76d2-4c5d-bb94-f9a4490796ca" containerName="registry-server" Jan 28 15:26:29 crc kubenswrapper[4981]: E0128 15:26:29.949584 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b8af8d-76d2-4c5d-bb94-f9a4490796ca" containerName="extract-content" Jan 28 15:26:29 crc kubenswrapper[4981]: I0128 15:26:29.949592 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b8af8d-76d2-4c5d-bb94-f9a4490796ca" containerName="extract-content" Jan 28 15:26:29 crc kubenswrapper[4981]: E0128 15:26:29.949606 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83b8af8d-76d2-4c5d-bb94-f9a4490796ca" containerName="extract-utilities" Jan 28 15:26:29 crc kubenswrapper[4981]: I0128 15:26:29.949615 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="83b8af8d-76d2-4c5d-bb94-f9a4490796ca" containerName="extract-utilities" Jan 28 15:26:29 crc kubenswrapper[4981]: I0128 15:26:29.949839 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="83b8af8d-76d2-4c5d-bb94-f9a4490796ca" containerName="registry-server" Jan 28 15:26:29 crc kubenswrapper[4981]: I0128 15:26:29.951088 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:29 crc kubenswrapper[4981]: I0128 15:26:29.971649 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-t7szx"] Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.046821 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.046891 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs8dt\" (UniqueName: \"kubernetes.io/projected/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-kube-api-access-fs8dt\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.046926 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.046955 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.046983 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.047038 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-config\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.047056 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.159511 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.160019 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs8dt\" (UniqueName: \"kubernetes.io/projected/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-kube-api-access-fs8dt\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.160064 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.160111 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.160161 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.160246 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-config\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.160274 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.160609 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-dns-swift-storage-0\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.161072 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-ovsdbserver-nb\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.161293 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-ovsdbserver-sb\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.161489 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" 
(UniqueName: \"kubernetes.io/configmap/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-openstack-edpm-ipam\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.161494 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-config\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.161631 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-dns-svc\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.192220 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs8dt\" (UniqueName: \"kubernetes.io/projected/83f911a5-2f1f-4cc2-a2cb-74c94632dd94-kube-api-access-fs8dt\") pod \"dnsmasq-dns-cb6ffcf87-t7szx\" (UID: \"83f911a5-2f1f-4cc2-a2cb-74c94632dd94\") " pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.283231 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.295055 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.364158 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxlqm\" (UniqueName: \"kubernetes.io/projected/46b93ba2-9bbb-4754-8aee-7c588bc645de-kube-api-access-pxlqm\") pod \"46b93ba2-9bbb-4754-8aee-7c588bc645de\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.364281 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-config\") pod \"46b93ba2-9bbb-4754-8aee-7c588bc645de\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.364310 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-dns-swift-storage-0\") pod \"46b93ba2-9bbb-4754-8aee-7c588bc645de\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.364336 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-dns-svc\") pod \"46b93ba2-9bbb-4754-8aee-7c588bc645de\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.364455 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-ovsdbserver-nb\") pod \"46b93ba2-9bbb-4754-8aee-7c588bc645de\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.364482 4981 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-ovsdbserver-sb\") pod \"46b93ba2-9bbb-4754-8aee-7c588bc645de\" (UID: \"46b93ba2-9bbb-4754-8aee-7c588bc645de\") " Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.369649 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46b93ba2-9bbb-4754-8aee-7c588bc645de-kube-api-access-pxlqm" (OuterVolumeSpecName: "kube-api-access-pxlqm") pod "46b93ba2-9bbb-4754-8aee-7c588bc645de" (UID: "46b93ba2-9bbb-4754-8aee-7c588bc645de"). InnerVolumeSpecName "kube-api-access-pxlqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.414002 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "46b93ba2-9bbb-4754-8aee-7c588bc645de" (UID: "46b93ba2-9bbb-4754-8aee-7c588bc645de"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.428323 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "46b93ba2-9bbb-4754-8aee-7c588bc645de" (UID: "46b93ba2-9bbb-4754-8aee-7c588bc645de"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.429607 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "46b93ba2-9bbb-4754-8aee-7c588bc645de" (UID: "46b93ba2-9bbb-4754-8aee-7c588bc645de"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.443641 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "46b93ba2-9bbb-4754-8aee-7c588bc645de" (UID: "46b93ba2-9bbb-4754-8aee-7c588bc645de"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.444566 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-config" (OuterVolumeSpecName: "config") pod "46b93ba2-9bbb-4754-8aee-7c588bc645de" (UID: "46b93ba2-9bbb-4754-8aee-7c588bc645de"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.466523 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxlqm\" (UniqueName: \"kubernetes.io/projected/46b93ba2-9bbb-4754-8aee-7c588bc645de-kube-api-access-pxlqm\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.466553 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.466562 4981 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.466570 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.466580 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.466587 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46b93ba2-9bbb-4754-8aee-7c588bc645de-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.797401 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb6ffcf87-t7szx"] Jan 28 15:26:30 crc kubenswrapper[4981]: W0128 15:26:30.798457 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83f911a5_2f1f_4cc2_a2cb_74c94632dd94.slice/crio-84d0c70c636b59681d45842ec2cf8ccc155852cb244620b8c93d8839624d785b WatchSource:0}: Error finding container 84d0c70c636b59681d45842ec2cf8ccc155852cb244620b8c93d8839624d785b: Status 404 returned error can't find the container with id 84d0c70c636b59681d45842ec2cf8ccc155852cb244620b8c93d8839624d785b Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.874465 4981 generic.go:334] "Generic (PLEG): container finished" podID="46b93ba2-9bbb-4754-8aee-7c588bc645de" containerID="11f0ace9333df29fd82dfbe7a5b5d48e8b15826e3a2403f07bcdfb865d5bc738" exitCode=0 Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.874500 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.874515 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5" event={"ID":"46b93ba2-9bbb-4754-8aee-7c588bc645de","Type":"ContainerDied","Data":"11f0ace9333df29fd82dfbe7a5b5d48e8b15826e3a2403f07bcdfb865d5bc738"} Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.876037 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59cf4bdb65-b4nl5" event={"ID":"46b93ba2-9bbb-4754-8aee-7c588bc645de","Type":"ContainerDied","Data":"66bc54a211c5dbd15f5dce7ba9cc7b75d4b4d2eda68e13f535c26106862a55dd"} Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.876069 4981 scope.go:117] "RemoveContainer" containerID="11f0ace9333df29fd82dfbe7a5b5d48e8b15826e3a2403f07bcdfb865d5bc738" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.877003 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" event={"ID":"83f911a5-2f1f-4cc2-a2cb-74c94632dd94","Type":"ContainerStarted","Data":"84d0c70c636b59681d45842ec2cf8ccc155852cb244620b8c93d8839624d785b"} Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.903374 4981 scope.go:117] "RemoveContainer" containerID="12b2ccac9229ad2a3652827c95682a8360888aa87c768c67f5e883c657ae937f" Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.941804 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-b4nl5"] Jan 28 15:26:30 crc kubenswrapper[4981]: I0128 15:26:30.951481 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59cf4bdb65-b4nl5"] Jan 28 15:26:31 crc kubenswrapper[4981]: I0128 15:26:31.013736 4981 scope.go:117] "RemoveContainer" containerID="11f0ace9333df29fd82dfbe7a5b5d48e8b15826e3a2403f07bcdfb865d5bc738" Jan 28 15:26:31 crc kubenswrapper[4981]: E0128 15:26:31.014460 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11f0ace9333df29fd82dfbe7a5b5d48e8b15826e3a2403f07bcdfb865d5bc738\": container with ID starting with 11f0ace9333df29fd82dfbe7a5b5d48e8b15826e3a2403f07bcdfb865d5bc738 not found: ID does not exist" containerID="11f0ace9333df29fd82dfbe7a5b5d48e8b15826e3a2403f07bcdfb865d5bc738" Jan 28 15:26:31 crc kubenswrapper[4981]: I0128 15:26:31.014509 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11f0ace9333df29fd82dfbe7a5b5d48e8b15826e3a2403f07bcdfb865d5bc738"} err="failed to get container status \"11f0ace9333df29fd82dfbe7a5b5d48e8b15826e3a2403f07bcdfb865d5bc738\": rpc error: code = NotFound desc = could not find container \"11f0ace9333df29fd82dfbe7a5b5d48e8b15826e3a2403f07bcdfb865d5bc738\": container with ID starting with 11f0ace9333df29fd82dfbe7a5b5d48e8b15826e3a2403f07bcdfb865d5bc738 not found: ID does not exist" Jan 28 15:26:31 crc kubenswrapper[4981]: I0128 15:26:31.014539 4981 scope.go:117] "RemoveContainer" containerID="12b2ccac9229ad2a3652827c95682a8360888aa87c768c67f5e883c657ae937f" Jan 28 15:26:31 crc kubenswrapper[4981]: E0128 15:26:31.015295 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12b2ccac9229ad2a3652827c95682a8360888aa87c768c67f5e883c657ae937f\": container with ID starting with 12b2ccac9229ad2a3652827c95682a8360888aa87c768c67f5e883c657ae937f not found: ID does not exist" 
containerID="12b2ccac9229ad2a3652827c95682a8360888aa87c768c67f5e883c657ae937f" Jan 28 15:26:31 crc kubenswrapper[4981]: I0128 15:26:31.015331 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12b2ccac9229ad2a3652827c95682a8360888aa87c768c67f5e883c657ae937f"} err="failed to get container status \"12b2ccac9229ad2a3652827c95682a8360888aa87c768c67f5e883c657ae937f\": rpc error: code = NotFound desc = could not find container \"12b2ccac9229ad2a3652827c95682a8360888aa87c768c67f5e883c657ae937f\": container with ID starting with 12b2ccac9229ad2a3652827c95682a8360888aa87c768c67f5e883c657ae937f not found: ID does not exist" Jan 28 15:26:31 crc kubenswrapper[4981]: I0128 15:26:31.332295 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46b93ba2-9bbb-4754-8aee-7c588bc645de" path="/var/lib/kubelet/pods/46b93ba2-9bbb-4754-8aee-7c588bc645de/volumes" Jan 28 15:26:31 crc kubenswrapper[4981]: I0128 15:26:31.893612 4981 generic.go:334] "Generic (PLEG): container finished" podID="83f911a5-2f1f-4cc2-a2cb-74c94632dd94" containerID="5fd1b097816c0e53613d1f6d62157a9b9aed246b2bc8d7cbf4296b2a656b9266" exitCode=0 Jan 28 15:26:31 crc kubenswrapper[4981]: I0128 15:26:31.893729 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" event={"ID":"83f911a5-2f1f-4cc2-a2cb-74c94632dd94","Type":"ContainerDied","Data":"5fd1b097816c0e53613d1f6d62157a9b9aed246b2bc8d7cbf4296b2a656b9266"} Jan 28 15:26:32 crc kubenswrapper[4981]: I0128 15:26:32.910244 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" event={"ID":"83f911a5-2f1f-4cc2-a2cb-74c94632dd94","Type":"ContainerStarted","Data":"ed73e5f770899ebed942b1b1b7016d47e6c456d9b3767ffff8bd4f6262fa6ce0"} Jan 28 15:26:32 crc kubenswrapper[4981]: I0128 15:26:32.910764 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:32 crc kubenswrapper[4981]: I0128 15:26:32.947059 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" podStartSLOduration=3.947028764 podStartE2EDuration="3.947028764s" podCreationTimestamp="2026-01-28 15:26:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:26:32.938818443 +0000 UTC m=+1404.390976704" watchObservedRunningTime="2026-01-28 15:26:32.947028764 +0000 UTC m=+1404.399187055" Jan 28 15:26:34 crc kubenswrapper[4981]: I0128 15:26:34.766456 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v7dkc"] Jan 28 15:26:34 crc kubenswrapper[4981]: E0128 15:26:34.767149 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b93ba2-9bbb-4754-8aee-7c588bc645de" containerName="dnsmasq-dns" Jan 28 15:26:34 crc kubenswrapper[4981]: I0128 15:26:34.767162 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b93ba2-9bbb-4754-8aee-7c588bc645de" containerName="dnsmasq-dns" Jan 28 15:26:34 crc kubenswrapper[4981]: E0128 15:26:34.767175 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46b93ba2-9bbb-4754-8aee-7c588bc645de" containerName="init" Jan 28 15:26:34 crc kubenswrapper[4981]: I0128 15:26:34.767181 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="46b93ba2-9bbb-4754-8aee-7c588bc645de" containerName="init" Jan 28 15:26:34 crc kubenswrapper[4981]: I0128 15:26:34.767434 4981 
memory_manager.go:354] "RemoveStaleState removing state" podUID="46b93ba2-9bbb-4754-8aee-7c588bc645de" containerName="dnsmasq-dns" Jan 28 15:26:34 crc kubenswrapper[4981]: I0128 15:26:34.768837 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v7dkc" Jan 28 15:26:34 crc kubenswrapper[4981]: I0128 15:26:34.778928 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v7dkc"] Jan 28 15:26:34 crc kubenswrapper[4981]: I0128 15:26:34.867727 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ea7719-480d-4228-86b9-048e41849023-utilities\") pod \"community-operators-v7dkc\" (UID: \"c2ea7719-480d-4228-86b9-048e41849023\") " pod="openshift-marketplace/community-operators-v7dkc" Jan 28 15:26:34 crc kubenswrapper[4981]: I0128 15:26:34.867971 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpkqk\" (UniqueName: \"kubernetes.io/projected/c2ea7719-480d-4228-86b9-048e41849023-kube-api-access-rpkqk\") pod \"community-operators-v7dkc\" (UID: \"c2ea7719-480d-4228-86b9-048e41849023\") " pod="openshift-marketplace/community-operators-v7dkc" Jan 28 15:26:34 crc kubenswrapper[4981]: I0128 15:26:34.868261 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ea7719-480d-4228-86b9-048e41849023-catalog-content\") pod \"community-operators-v7dkc\" (UID: \"c2ea7719-480d-4228-86b9-048e41849023\") " pod="openshift-marketplace/community-operators-v7dkc" Jan 28 15:26:34 crc kubenswrapper[4981]: I0128 15:26:34.969964 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ea7719-480d-4228-86b9-048e41849023-catalog-content\") pod \"community-operators-v7dkc\" (UID: \"c2ea7719-480d-4228-86b9-048e41849023\") " pod="openshift-marketplace/community-operators-v7dkc" Jan 28 15:26:34 crc kubenswrapper[4981]: I0128 15:26:34.970081 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ea7719-480d-4228-86b9-048e41849023-utilities\") pod \"community-operators-v7dkc\" (UID: \"c2ea7719-480d-4228-86b9-048e41849023\") " pod="openshift-marketplace/community-operators-v7dkc" Jan 28 15:26:34 crc kubenswrapper[4981]: I0128 15:26:34.970225 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpkqk\" (UniqueName: \"kubernetes.io/projected/c2ea7719-480d-4228-86b9-048e41849023-kube-api-access-rpkqk\") pod \"community-operators-v7dkc\" (UID: \"c2ea7719-480d-4228-86b9-048e41849023\") " pod="openshift-marketplace/community-operators-v7dkc" Jan 28 15:26:34 crc kubenswrapper[4981]: I0128 15:26:34.970551 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ea7719-480d-4228-86b9-048e41849023-utilities\") pod \"community-operators-v7dkc\" (UID: \"c2ea7719-480d-4228-86b9-048e41849023\") " pod="openshift-marketplace/community-operators-v7dkc" Jan 28 15:26:34 crc kubenswrapper[4981]: I0128 15:26:34.970781 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c2ea7719-480d-4228-86b9-048e41849023-catalog-content\") pod \"community-operators-v7dkc\" (UID: \"c2ea7719-480d-4228-86b9-048e41849023\") " pod="openshift-marketplace/community-operators-v7dkc" Jan 28 15:26:35 crc kubenswrapper[4981]: I0128 15:26:34.989461 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpkqk\" (UniqueName: \"kubernetes.io/projected/c2ea7719-480d-4228-86b9-048e41849023-kube-api-access-rpkqk\") pod \"community-operators-v7dkc\" (UID: \"c2ea7719-480d-4228-86b9-048e41849023\") " pod="openshift-marketplace/community-operators-v7dkc" Jan 28 15:26:35 crc kubenswrapper[4981]: I0128 15:26:35.105418 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v7dkc" Jan 28 15:26:36 crc kubenswrapper[4981]: W0128 15:26:35.659515 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2ea7719_480d_4228_86b9_048e41849023.slice/crio-5183e38d11f17300f085293d4657271d4db4d0da1bbedc9c87ed4fa8653c6b32 WatchSource:0}: Error finding container 5183e38d11f17300f085293d4657271d4db4d0da1bbedc9c87ed4fa8653c6b32: Status 404 returned error can't find the container with id 5183e38d11f17300f085293d4657271d4db4d0da1bbedc9c87ed4fa8653c6b32 Jan 28 15:26:36 crc kubenswrapper[4981]: I0128 15:26:35.677690 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v7dkc"] Jan 28 15:26:36 crc kubenswrapper[4981]: I0128 15:26:35.945799 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7dkc" event={"ID":"c2ea7719-480d-4228-86b9-048e41849023","Type":"ContainerStarted","Data":"d2c07638d8e0d08a7e1bf7193743bb5154b0a9aaa3e40530b3174bfa5f23804a"} Jan 28 15:26:36 crc kubenswrapper[4981]: I0128 15:26:35.946151 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7dkc" event={"ID":"c2ea7719-480d-4228-86b9-048e41849023","Type":"ContainerStarted","Data":"5183e38d11f17300f085293d4657271d4db4d0da1bbedc9c87ed4fa8653c6b32"} Jan 28 15:26:36 crc kubenswrapper[4981]: I0128 15:26:36.955729 4981 generic.go:334] "Generic (PLEG): container finished" podID="c2ea7719-480d-4228-86b9-048e41849023" containerID="d2c07638d8e0d08a7e1bf7193743bb5154b0a9aaa3e40530b3174bfa5f23804a" exitCode=0 Jan 28 15:26:36 crc kubenswrapper[4981]: I0128 15:26:36.955851 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7dkc" event={"ID":"c2ea7719-480d-4228-86b9-048e41849023","Type":"ContainerDied","Data":"d2c07638d8e0d08a7e1bf7193743bb5154b0a9aaa3e40530b3174bfa5f23804a"} Jan 28 15:26:37 crc kubenswrapper[4981]: I0128 15:26:37.968630 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7dkc" event={"ID":"c2ea7719-480d-4228-86b9-048e41849023","Type":"ContainerStarted","Data":"b83e6feafb8ffb44ea6dc2f9146a5ce820bbe5e76ce9c8fe4bc2c474e2cf6f94"} Jan 28 15:26:38 crc kubenswrapper[4981]: I0128 15:26:38.985271 4981 generic.go:334] "Generic (PLEG): container finished" podID="c2ea7719-480d-4228-86b9-048e41849023" containerID="b83e6feafb8ffb44ea6dc2f9146a5ce820bbe5e76ce9c8fe4bc2c474e2cf6f94" exitCode=0 Jan 28 15:26:38 crc kubenswrapper[4981]: I0128 15:26:38.985321 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7dkc" 
event={"ID":"c2ea7719-480d-4228-86b9-048e41849023","Type":"ContainerDied","Data":"b83e6feafb8ffb44ea6dc2f9146a5ce820bbe5e76ce9c8fe4bc2c474e2cf6f94"} Jan 28 15:26:39 crc kubenswrapper[4981]: I0128 15:26:39.995123 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7dkc" event={"ID":"c2ea7719-480d-4228-86b9-048e41849023","Type":"ContainerStarted","Data":"dde22cc40e2b501a853c81d7ddacfd1e7c13da7f1ffdaa4ad95cda7867d86fd2"} Jan 28 15:26:40 crc kubenswrapper[4981]: I0128 15:26:40.012408 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v7dkc" podStartSLOduration=3.5366271190000003 podStartE2EDuration="6.012390709s" podCreationTimestamp="2026-01-28 15:26:34 +0000 UTC" firstStartedPulling="2026-01-28 15:26:36.957804508 +0000 UTC m=+1408.409962749" lastFinishedPulling="2026-01-28 15:26:39.433568058 +0000 UTC m=+1410.885726339" observedRunningTime="2026-01-28 15:26:40.0098312 +0000 UTC m=+1411.461989501" watchObservedRunningTime="2026-01-28 15:26:40.012390709 +0000 UTC m=+1411.464548950" Jan 28 15:26:40 crc kubenswrapper[4981]: I0128 15:26:40.284910 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb6ffcf87-t7szx" Jan 28 15:26:40 crc kubenswrapper[4981]: I0128 15:26:40.349797 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-4wpk8"] Jan 28 15:26:40 crc kubenswrapper[4981]: I0128 15:26:40.350334 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" podUID="20edd6cf-e425-4544-9d68-0523586dd434" containerName="dnsmasq-dns" containerID="cri-o://2e0c96ce68b37ec50280d97bcff9945573b4995524c56e25550ce637c9b79985" gracePeriod=10 Jan 28 15:26:40 crc kubenswrapper[4981]: I0128 15:26:40.955590 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.007619 4981 generic.go:334] "Generic (PLEG): container finished" podID="20edd6cf-e425-4544-9d68-0523586dd434" containerID="2e0c96ce68b37ec50280d97bcff9945573b4995524c56e25550ce637c9b79985" exitCode=0 Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.008235 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67b789f86c-4wpk8"
Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.008310 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" event={"ID":"20edd6cf-e425-4544-9d68-0523586dd434","Type":"ContainerDied","Data":"2e0c96ce68b37ec50280d97bcff9945573b4995524c56e25550ce637c9b79985"}
Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.008387 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67b789f86c-4wpk8" event={"ID":"20edd6cf-e425-4544-9d68-0523586dd434","Type":"ContainerDied","Data":"ff60f75fc82290ec3d1ee7c57378bf66e9ac9172f1e7033c99e3a6eed9f3ba26"}
Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.008411 4981 scope.go:117] "RemoveContainer" containerID="2e0c96ce68b37ec50280d97bcff9945573b4995524c56e25550ce637c9b79985"
Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.051633 4981 scope.go:117] "RemoveContainer" containerID="c94d15df5d98036b72e2667ebc5c90013be36926cda74791f7448fc7002087c6"
Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.084092 4981 scope.go:117] "RemoveContainer" containerID="2e0c96ce68b37ec50280d97bcff9945573b4995524c56e25550ce637c9b79985"
Jan 28 15:26:41 crc kubenswrapper[4981]: E0128 15:26:41.084711 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e0c96ce68b37ec50280d97bcff9945573b4995524c56e25550ce637c9b79985\": container with ID starting with 2e0c96ce68b37ec50280d97bcff9945573b4995524c56e25550ce637c9b79985 not found: ID does not exist" containerID="2e0c96ce68b37ec50280d97bcff9945573b4995524c56e25550ce637c9b79985"
Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.084768 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e0c96ce68b37ec50280d97bcff9945573b4995524c56e25550ce637c9b79985"} err="failed to get container status \"2e0c96ce68b37ec50280d97bcff9945573b4995524c56e25550ce637c9b79985\": rpc error: code = NotFound desc = could not find container \"2e0c96ce68b37ec50280d97bcff9945573b4995524c56e25550ce637c9b79985\": container with ID starting with 2e0c96ce68b37ec50280d97bcff9945573b4995524c56e25550ce637c9b79985 not found: ID does not exist"
Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.084803 4981 scope.go:117] "RemoveContainer" containerID="c94d15df5d98036b72e2667ebc5c90013be36926cda74791f7448fc7002087c6"
Jan 28 15:26:41 crc kubenswrapper[4981]: E0128 15:26:41.085146 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c94d15df5d98036b72e2667ebc5c90013be36926cda74791f7448fc7002087c6\": container with ID starting with c94d15df5d98036b72e2667ebc5c90013be36926cda74791f7448fc7002087c6 not found: ID does not exist" containerID="c94d15df5d98036b72e2667ebc5c90013be36926cda74791f7448fc7002087c6"
Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.085216 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c94d15df5d98036b72e2667ebc5c90013be36926cda74791f7448fc7002087c6"} err="failed to get container status \"c94d15df5d98036b72e2667ebc5c90013be36926cda74791f7448fc7002087c6\": rpc error: code = NotFound desc = could not find container \"c94d15df5d98036b72e2667ebc5c90013be36926cda74791f7448fc7002087c6\": container with ID starting with c94d15df5d98036b72e2667ebc5c90013be36926cda74791f7448fc7002087c6 not found: ID does not exist"
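The paired "ContainerStatus from runtime service failed" / "DeleteContainer returned error" entries above are the kubelet re-deleting containers that CRI-O has already removed: the runtime answers with gRPC NotFound and cleanup proceeds as if the delete had succeeded. A minimal Go sketch of that check, assuming a gRPC-backed CRI client; alreadyRemoved is an invented helper name, not kubelet source:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// alreadyRemoved reports whether a CRI error is a gRPC NotFound, i.e. the
// runtime no longer knows the container ID being deleted.
func alreadyRemoved(err error) bool {
	s, ok := status.FromError(err)
	return ok && s.Code() == codes.NotFound
}

func main() {
	// Shape of the error in the log above (container ID abbreviated).
	err := status.Error(codes.NotFound, `could not find container "2e0c96ce..."`)
	if alreadyRemoved(err) {
		fmt.Println("container already gone; treating delete as a no-op")
	}
}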
Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.093471 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-config\") pod \"20edd6cf-e425-4544-9d68-0523586dd434\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") "
Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.093526 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rfrk\" (UniqueName: \"kubernetes.io/projected/20edd6cf-e425-4544-9d68-0523586dd434-kube-api-access-5rfrk\") pod \"20edd6cf-e425-4544-9d68-0523586dd434\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") "
Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.093563 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-dns-svc\") pod \"20edd6cf-e425-4544-9d68-0523586dd434\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") "
Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.093621 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-ovsdbserver-sb\") pod \"20edd6cf-e425-4544-9d68-0523586dd434\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") "
Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.093705 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-openstack-edpm-ipam\") pod \"20edd6cf-e425-4544-9d68-0523586dd434\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") "
Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.093823 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-ovsdbserver-nb\") pod \"20edd6cf-e425-4544-9d68-0523586dd434\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") "
Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.093884 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-dns-swift-storage-0\") pod \"20edd6cf-e425-4544-9d68-0523586dd434\" (UID: \"20edd6cf-e425-4544-9d68-0523586dd434\") "
Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.102096 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20edd6cf-e425-4544-9d68-0523586dd434-kube-api-access-5rfrk" (OuterVolumeSpecName: "kube-api-access-5rfrk") pod "20edd6cf-e425-4544-9d68-0523586dd434" (UID: "20edd6cf-e425-4544-9d68-0523586dd434"). InnerVolumeSpecName "kube-api-access-5rfrk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.153256 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-config" (OuterVolumeSpecName: "config") pod "20edd6cf-e425-4544-9d68-0523586dd434" (UID: "20edd6cf-e425-4544-9d68-0523586dd434"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.156452 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "20edd6cf-e425-4544-9d68-0523586dd434" (UID: "20edd6cf-e425-4544-9d68-0523586dd434"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.157454 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "20edd6cf-e425-4544-9d68-0523586dd434" (UID: "20edd6cf-e425-4544-9d68-0523586dd434"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.163907 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "20edd6cf-e425-4544-9d68-0523586dd434" (UID: "20edd6cf-e425-4544-9d68-0523586dd434"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.167843 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "20edd6cf-e425-4544-9d68-0523586dd434" (UID: "20edd6cf-e425-4544-9d68-0523586dd434"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.171227 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "20edd6cf-e425-4544-9d68-0523586dd434" (UID: "20edd6cf-e425-4544-9d68-0523586dd434"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.196766 4981 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.196808 4981 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-config\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.196821 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rfrk\" (UniqueName: \"kubernetes.io/projected/20edd6cf-e425-4544-9d68-0523586dd434-kube-api-access-5rfrk\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.196836 4981 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.196845 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.196856 4981 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.196864 4981 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/20edd6cf-e425-4544-9d68-0523586dd434-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.351008 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-4wpk8"] Jan 28 15:26:41 crc kubenswrapper[4981]: I0128 15:26:41.376602 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67b789f86c-4wpk8"] Jan 28 15:26:43 crc kubenswrapper[4981]: I0128 15:26:43.329944 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20edd6cf-e425-4544-9d68-0523586dd434" path="/var/lib/kubelet/pods/20edd6cf-e425-4544-9d68-0523586dd434/volumes" Jan 28 15:26:45 crc kubenswrapper[4981]: I0128 15:26:45.106077 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v7dkc" Jan 28 15:26:45 crc kubenswrapper[4981]: I0128 15:26:45.106447 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v7dkc" Jan 28 15:26:45 crc kubenswrapper[4981]: I0128 15:26:45.190565 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v7dkc" Jan 28 15:26:46 crc kubenswrapper[4981]: I0128 15:26:46.105501 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v7dkc" Jan 28 15:26:48 crc kubenswrapper[4981]: I0128 15:26:48.360372 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v7dkc"] Jan 28 15:26:48 crc kubenswrapper[4981]: I0128 15:26:48.360720 4981 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/community-operators-v7dkc" podUID="c2ea7719-480d-4228-86b9-048e41849023" containerName="registry-server" containerID="cri-o://dde22cc40e2b501a853c81d7ddacfd1e7c13da7f1ffdaa4ad95cda7867d86fd2" gracePeriod=2 Jan 28 15:26:48 crc kubenswrapper[4981]: I0128 15:26:48.846819 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v7dkc" Jan 28 15:26:48 crc kubenswrapper[4981]: I0128 15:26:48.940166 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ea7719-480d-4228-86b9-048e41849023-catalog-content\") pod \"c2ea7719-480d-4228-86b9-048e41849023\" (UID: \"c2ea7719-480d-4228-86b9-048e41849023\") " Jan 28 15:26:48 crc kubenswrapper[4981]: I0128 15:26:48.940441 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ea7719-480d-4228-86b9-048e41849023-utilities\") pod \"c2ea7719-480d-4228-86b9-048e41849023\" (UID: \"c2ea7719-480d-4228-86b9-048e41849023\") " Jan 28 15:26:48 crc kubenswrapper[4981]: I0128 15:26:48.940473 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpkqk\" (UniqueName: \"kubernetes.io/projected/c2ea7719-480d-4228-86b9-048e41849023-kube-api-access-rpkqk\") pod \"c2ea7719-480d-4228-86b9-048e41849023\" (UID: \"c2ea7719-480d-4228-86b9-048e41849023\") " Jan 28 15:26:48 crc kubenswrapper[4981]: I0128 15:26:48.942036 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2ea7719-480d-4228-86b9-048e41849023-utilities" (OuterVolumeSpecName: "utilities") pod "c2ea7719-480d-4228-86b9-048e41849023" (UID: "c2ea7719-480d-4228-86b9-048e41849023"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:26:48 crc kubenswrapper[4981]: I0128 15:26:48.949345 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2ea7719-480d-4228-86b9-048e41849023-kube-api-access-rpkqk" (OuterVolumeSpecName: "kube-api-access-rpkqk") pod "c2ea7719-480d-4228-86b9-048e41849023" (UID: "c2ea7719-480d-4228-86b9-048e41849023"). InnerVolumeSpecName "kube-api-access-rpkqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:26:48 crc kubenswrapper[4981]: I0128 15:26:48.988222 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2ea7719-480d-4228-86b9-048e41849023-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2ea7719-480d-4228-86b9-048e41849023" (UID: "c2ea7719-480d-4228-86b9-048e41849023"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.042472 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ea7719-480d-4228-86b9-048e41849023-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.042502 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpkqk\" (UniqueName: \"kubernetes.io/projected/c2ea7719-480d-4228-86b9-048e41849023-kube-api-access-rpkqk\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.042513 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ea7719-480d-4228-86b9-048e41849023-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.094447 4981 generic.go:334] "Generic (PLEG): container finished" podID="c2ea7719-480d-4228-86b9-048e41849023" containerID="dde22cc40e2b501a853c81d7ddacfd1e7c13da7f1ffdaa4ad95cda7867d86fd2" exitCode=0 Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.094491 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7dkc" event={"ID":"c2ea7719-480d-4228-86b9-048e41849023","Type":"ContainerDied","Data":"dde22cc40e2b501a853c81d7ddacfd1e7c13da7f1ffdaa4ad95cda7867d86fd2"} Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.094525 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7dkc" event={"ID":"c2ea7719-480d-4228-86b9-048e41849023","Type":"ContainerDied","Data":"5183e38d11f17300f085293d4657271d4db4d0da1bbedc9c87ed4fa8653c6b32"} Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.094544 4981 scope.go:117] "RemoveContainer" containerID="dde22cc40e2b501a853c81d7ddacfd1e7c13da7f1ffdaa4ad95cda7867d86fd2" Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.094566 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v7dkc" Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.129939 4981 scope.go:117] "RemoveContainer" containerID="b83e6feafb8ffb44ea6dc2f9146a5ce820bbe5e76ce9c8fe4bc2c474e2cf6f94" Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.166938 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v7dkc"] Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.178639 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v7dkc"] Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.344639 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2ea7719-480d-4228-86b9-048e41849023" path="/var/lib/kubelet/pods/c2ea7719-480d-4228-86b9-048e41849023/volumes" Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.356140 4981 scope.go:117] "RemoveContainer" containerID="d2c07638d8e0d08a7e1bf7193743bb5154b0a9aaa3e40530b3174bfa5f23804a" Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.389809 4981 scope.go:117] "RemoveContainer" containerID="dde22cc40e2b501a853c81d7ddacfd1e7c13da7f1ffdaa4ad95cda7867d86fd2" Jan 28 15:26:49 crc kubenswrapper[4981]: E0128 15:26:49.390374 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dde22cc40e2b501a853c81d7ddacfd1e7c13da7f1ffdaa4ad95cda7867d86fd2\": container with ID starting with dde22cc40e2b501a853c81d7ddacfd1e7c13da7f1ffdaa4ad95cda7867d86fd2 not found: ID does not exist" containerID="dde22cc40e2b501a853c81d7ddacfd1e7c13da7f1ffdaa4ad95cda7867d86fd2" Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.390405 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dde22cc40e2b501a853c81d7ddacfd1e7c13da7f1ffdaa4ad95cda7867d86fd2"} err="failed to get container status \"dde22cc40e2b501a853c81d7ddacfd1e7c13da7f1ffdaa4ad95cda7867d86fd2\": rpc error: code = NotFound desc = could not find container \"dde22cc40e2b501a853c81d7ddacfd1e7c13da7f1ffdaa4ad95cda7867d86fd2\": container with ID starting with dde22cc40e2b501a853c81d7ddacfd1e7c13da7f1ffdaa4ad95cda7867d86fd2 not found: ID does not exist" Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.390429 4981 scope.go:117] "RemoveContainer" containerID="b83e6feafb8ffb44ea6dc2f9146a5ce820bbe5e76ce9c8fe4bc2c474e2cf6f94" Jan 28 15:26:49 crc kubenswrapper[4981]: E0128 15:26:49.395315 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b83e6feafb8ffb44ea6dc2f9146a5ce820bbe5e76ce9c8fe4bc2c474e2cf6f94\": container with ID starting with b83e6feafb8ffb44ea6dc2f9146a5ce820bbe5e76ce9c8fe4bc2c474e2cf6f94 not found: ID does not exist" containerID="b83e6feafb8ffb44ea6dc2f9146a5ce820bbe5e76ce9c8fe4bc2c474e2cf6f94" Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.395348 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b83e6feafb8ffb44ea6dc2f9146a5ce820bbe5e76ce9c8fe4bc2c474e2cf6f94"} err="failed to get container status \"b83e6feafb8ffb44ea6dc2f9146a5ce820bbe5e76ce9c8fe4bc2c474e2cf6f94\": rpc error: code = NotFound desc = could not find container \"b83e6feafb8ffb44ea6dc2f9146a5ce820bbe5e76ce9c8fe4bc2c474e2cf6f94\": container with ID starting with b83e6feafb8ffb44ea6dc2f9146a5ce820bbe5e76ce9c8fe4bc2c474e2cf6f94 not found: ID does not exist" Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 
15:26:49.395392 4981 scope.go:117] "RemoveContainer" containerID="d2c07638d8e0d08a7e1bf7193743bb5154b0a9aaa3e40530b3174bfa5f23804a" Jan 28 15:26:49 crc kubenswrapper[4981]: E0128 15:26:49.395789 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2c07638d8e0d08a7e1bf7193743bb5154b0a9aaa3e40530b3174bfa5f23804a\": container with ID starting with d2c07638d8e0d08a7e1bf7193743bb5154b0a9aaa3e40530b3174bfa5f23804a not found: ID does not exist" containerID="d2c07638d8e0d08a7e1bf7193743bb5154b0a9aaa3e40530b3174bfa5f23804a" Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.395855 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2c07638d8e0d08a7e1bf7193743bb5154b0a9aaa3e40530b3174bfa5f23804a"} err="failed to get container status \"d2c07638d8e0d08a7e1bf7193743bb5154b0a9aaa3e40530b3174bfa5f23804a\": rpc error: code = NotFound desc = could not find container \"d2c07638d8e0d08a7e1bf7193743bb5154b0a9aaa3e40530b3174bfa5f23804a\": container with ID starting with d2c07638d8e0d08a7e1bf7193743bb5154b0a9aaa3e40530b3174bfa5f23804a not found: ID does not exist" Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.898287 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:26:49 crc kubenswrapper[4981]: I0128 15:26:49.898355 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.459632 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd"] Jan 28 15:26:53 crc kubenswrapper[4981]: E0128 15:26:53.460379 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20edd6cf-e425-4544-9d68-0523586dd434" containerName="init" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.460397 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="20edd6cf-e425-4544-9d68-0523586dd434" containerName="init" Jan 28 15:26:53 crc kubenswrapper[4981]: E0128 15:26:53.460415 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ea7719-480d-4228-86b9-048e41849023" containerName="extract-content" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.460422 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ea7719-480d-4228-86b9-048e41849023" containerName="extract-content" Jan 28 15:26:53 crc kubenswrapper[4981]: E0128 15:26:53.460434 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ea7719-480d-4228-86b9-048e41849023" containerName="registry-server" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.460442 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ea7719-480d-4228-86b9-048e41849023" containerName="registry-server" Jan 28 15:26:53 crc kubenswrapper[4981]: E0128 15:26:53.460461 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ea7719-480d-4228-86b9-048e41849023" containerName="extract-utilities" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 
15:26:53.460470 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ea7719-480d-4228-86b9-048e41849023" containerName="extract-utilities" Jan 28 15:26:53 crc kubenswrapper[4981]: E0128 15:26:53.460497 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20edd6cf-e425-4544-9d68-0523586dd434" containerName="dnsmasq-dns" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.460504 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="20edd6cf-e425-4544-9d68-0523586dd434" containerName="dnsmasq-dns" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.460737 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2ea7719-480d-4228-86b9-048e41849023" containerName="registry-server" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.460756 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="20edd6cf-e425-4544-9d68-0523586dd434" containerName="dnsmasq-dns" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.461505 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.464305 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.465202 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.465396 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.475683 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pz626" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.476331 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd"] Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.527085 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxtsz\" (UniqueName: \"kubernetes.io/projected/4db01d71-54cf-49d3-a603-09ee1687a0d6-kube-api-access-qxtsz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd\" (UID: \"4db01d71-54cf-49d3-a603-09ee1687a0d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.527420 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4db01d71-54cf-49d3-a603-09ee1687a0d6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd\" (UID: \"4db01d71-54cf-49d3-a603-09ee1687a0d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.527620 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4db01d71-54cf-49d3-a603-09ee1687a0d6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd\" (UID: \"4db01d71-54cf-49d3-a603-09ee1687a0d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.527717 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4db01d71-54cf-49d3-a603-09ee1687a0d6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd\" (UID: \"4db01d71-54cf-49d3-a603-09ee1687a0d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.630235 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxtsz\" (UniqueName: \"kubernetes.io/projected/4db01d71-54cf-49d3-a603-09ee1687a0d6-kube-api-access-qxtsz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd\" (UID: \"4db01d71-54cf-49d3-a603-09ee1687a0d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.630310 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4db01d71-54cf-49d3-a603-09ee1687a0d6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd\" (UID: \"4db01d71-54cf-49d3-a603-09ee1687a0d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.630408 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4db01d71-54cf-49d3-a603-09ee1687a0d6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd\" (UID: \"4db01d71-54cf-49d3-a603-09ee1687a0d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.630450 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4db01d71-54cf-49d3-a603-09ee1687a0d6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd\" (UID: \"4db01d71-54cf-49d3-a603-09ee1687a0d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.637821 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4db01d71-54cf-49d3-a603-09ee1687a0d6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd\" (UID: \"4db01d71-54cf-49d3-a603-09ee1687a0d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.637936 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4db01d71-54cf-49d3-a603-09ee1687a0d6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd\" (UID: \"4db01d71-54cf-49d3-a603-09ee1687a0d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.648732 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxtsz\" (UniqueName: \"kubernetes.io/projected/4db01d71-54cf-49d3-a603-09ee1687a0d6-kube-api-access-qxtsz\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd\" (UID: \"4db01d71-54cf-49d3-a603-09ee1687a0d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 
15:26:53.649735 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4db01d71-54cf-49d3-a603-09ee1687a0d6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd\" (UID: \"4db01d71-54cf-49d3-a603-09ee1687a0d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd"
Jan 28 15:26:53 crc kubenswrapper[4981]: I0128 15:26:53.833823 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd"
Jan 28 15:26:54 crc kubenswrapper[4981]: I0128 15:26:54.429689 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd"]
Jan 28 15:26:55 crc kubenswrapper[4981]: I0128 15:26:55.188349 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" event={"ID":"4db01d71-54cf-49d3-a603-09ee1687a0d6","Type":"ContainerStarted","Data":"ff24bf381572629f59831c6c4acfbced743e39c98fba8bcec4fbd0b315da686f"}
Jan 28 15:26:57 crc kubenswrapper[4981]: I0128 15:26:57.213388 4981 generic.go:334] "Generic (PLEG): container finished" podID="ead14a8f-5759-4a7c-b8a4-6560131c28d1" containerID="afd51ee20ac9edd19f29689c40ff8fd3e7e0696a7cb233beb9ca374843bb5029" exitCode=0
Jan 28 15:26:57 crc kubenswrapper[4981]: I0128 15:26:57.213446 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"ead14a8f-5759-4a7c-b8a4-6560131c28d1","Type":"ContainerDied","Data":"afd51ee20ac9edd19f29689c40ff8fd3e7e0696a7cb233beb9ca374843bb5029"}
Jan 28 15:27:00 crc kubenswrapper[4981]: I0128 15:27:00.276963 4981 generic.go:334] "Generic (PLEG): container finished" podID="8c327a08-ce8c-42f7-b305-cfc8b7f2d644" containerID="c4006fc0e5bbb20178f04e055b9e90a949f0b6d1b60add104f68df6946608ebd" exitCode=0
Jan 28 15:27:00 crc kubenswrapper[4981]: I0128 15:27:00.277002 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8c327a08-ce8c-42f7-b305-cfc8b7f2d644","Type":"ContainerDied","Data":"c4006fc0e5bbb20178f04e055b9e90a949f0b6d1b60add104f68df6946608ebd"}
Jan 28 15:27:08 crc kubenswrapper[4981]: E0128 15:27:08.517463 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest"
Jan 28 15:27:08 crc kubenswrapper[4981]: E0128 15:27:08.518699 4981 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 28 15:27:08 crc kubenswrapper[4981]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value:
Jan 28 15:27:08 crc kubenswrapper[4981]: - hosts: all
Jan 28 15:27:08 crc kubenswrapper[4981]: strategy: linear
Jan 28 15:27:08 crc kubenswrapper[4981]: tasks:
Jan 28 15:27:08 crc kubenswrapper[4981]: - name: Enable podified-repos
Jan 28 15:27:08 crc kubenswrapper[4981]: become: true
Jan 28 15:27:08 crc kubenswrapper[4981]: ansible.builtin.shell: |
Jan 28 15:27:08 crc kubenswrapper[4981]: set -euxo pipefail
Jan 28 15:27:08 crc kubenswrapper[4981]: pushd /var/tmp
Jan 28 15:27:08 crc kubenswrapper[4981]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
Jan 28 15:27:08 crc kubenswrapper[4981]: pushd repo-setup-main
Jan 28 15:27:08 crc kubenswrapper[4981]: python3 -m venv ./venv
Jan 28 15:27:08 crc kubenswrapper[4981]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./
Jan 28 15:27:08 crc kubenswrapper[4981]: ./venv/bin/repo-setup current-podified -b antelope
Jan 28 15:27:08 crc kubenswrapper[4981]: popd
Jan 28 15:27:08 crc kubenswrapper[4981]: rm -rf repo-setup-main
Jan 28 15:27:08 crc kubenswrapper[4981]: 
Jan 28 15:27:08 crc kubenswrapper[4981]: 
Jan 28 15:27:08 crc kubenswrapper[4981]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value:
Jan 28 15:27:08 crc kubenswrapper[4981]: edpm_override_hosts: openstack-edpm-ipam
Jan 28 15:27:08 crc kubenswrapper[4981]: edpm_service_type: repo-setup
Jan 28 15:27:08 crc kubenswrapper[4981]: 
Jan 28 15:27:08 crc kubenswrapper[4981]: 
Jan 28 15:27:08 crc kubenswrapper[4981]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qxtsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd_openstack(4db01d71-54cf-49d3-a603-09ee1687a0d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled
Jan 28 15:27:08 crc kubenswrapper[4981]: > logger="UnhandledError"
Jan 28 15:27:08 crc kubenswrapper[4981]: E0128 15:27:08.520499 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" podUID="4db01d71-54cf-49d3-a603-09ee1687a0d6"
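The ErrImagePull above (the pull's context was canceled) drops the container into image-pull backoff; the "Back-off pulling image" entry follows a second later, and the retried pull succeeds by 15:27:20 (lastFinishedPulling in the startup-latency entry further below). A sketch of that retry schedule, assuming the commonly cited kubelet defaults of a 10s initial backoff doubling up to a 5m cap; these constants are assumptions, not values read from this node's configuration:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed defaults: 10s initial delay, doubling per failure, 5m ceiling.
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("pull attempt %d fails -> next retry in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// Prints 10s, 20s, 40s, 1m20s, 2m40s, then stays capped at 5m0s.
}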
event={"ID":"ead14a8f-5759-4a7c-b8a4-6560131c28d1","Type":"ContainerStarted","Data":"cc00adcb6b262ad0928a5be372465bd49a3dd4a473f6facc7896faf7432bd9ed"} Jan 28 15:27:09 crc kubenswrapper[4981]: I0128 15:27:09.376534 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"8c327a08-ce8c-42f7-b305-cfc8b7f2d644","Type":"ContainerStarted","Data":"fc1ecfbe875691104bd0e458ce4e448d497f89988a8fb222dfb00707ff111cf5"} Jan 28 15:27:09 crc kubenswrapper[4981]: I0128 15:27:09.376821 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:27:09 crc kubenswrapper[4981]: E0128 15:27:09.378144 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" podUID="4db01d71-54cf-49d3-a603-09ee1687a0d6" Jan 28 15:27:09 crc kubenswrapper[4981]: I0128 15:27:09.418640 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=48.418615486 podStartE2EDuration="48.418615486s" podCreationTimestamp="2026-01-28 15:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:27:09.415020462 +0000 UTC m=+1440.867178713" watchObservedRunningTime="2026-01-28 15:27:09.418615486 +0000 UTC m=+1440.870773757" Jan 28 15:27:09 crc kubenswrapper[4981]: I0128 15:27:09.443510 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=47.443494285 podStartE2EDuration="47.443494285s" podCreationTimestamp="2026-01-28 15:26:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 15:27:09.442620882 +0000 UTC m=+1440.894779123" watchObservedRunningTime="2026-01-28 15:27:09.443494285 +0000 UTC m=+1440.895652536" Jan 28 15:27:11 crc kubenswrapper[4981]: I0128 15:27:11.996706 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 28 15:27:19 crc kubenswrapper[4981]: I0128 15:27:19.897683 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:27:19 crc kubenswrapper[4981]: I0128 15:27:19.898000 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:27:20 crc kubenswrapper[4981]: I0128 15:27:20.790249 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 15:27:21 crc kubenswrapper[4981]: I0128 15:27:21.504965 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" 
event={"ID":"4db01d71-54cf-49d3-a603-09ee1687a0d6","Type":"ContainerStarted","Data":"2438d4b45dd13ac5ba034bf14d886621820b68eb67d7fff8c1e37880651fb1f2"} Jan 28 15:27:21 crc kubenswrapper[4981]: I0128 15:27:21.531783 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" podStartSLOduration=2.180914213 podStartE2EDuration="28.53176561s" podCreationTimestamp="2026-01-28 15:26:53 +0000 UTC" firstStartedPulling="2026-01-28 15:26:54.437002665 +0000 UTC m=+1425.889160906" lastFinishedPulling="2026-01-28 15:27:20.787854052 +0000 UTC m=+1452.240012303" observedRunningTime="2026-01-28 15:27:21.526721179 +0000 UTC m=+1452.978879420" watchObservedRunningTime="2026-01-28 15:27:21.53176561 +0000 UTC m=+1452.983923851" Jan 28 15:27:22 crc kubenswrapper[4981]: I0128 15:27:22.000467 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 28 15:27:23 crc kubenswrapper[4981]: I0128 15:27:23.170015 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 28 15:27:30 crc kubenswrapper[4981]: I0128 15:27:30.700949 4981 scope.go:117] "RemoveContainer" containerID="3dfd3c0b41fc2054bb12a9b0e6266a0b848761f45f1a6627405d4920c9e8018c" Jan 28 15:27:30 crc kubenswrapper[4981]: I0128 15:27:30.742041 4981 scope.go:117] "RemoveContainer" containerID="6878b04f7d9757c06a27119a4d86587d52d67b084b4c2e7b7f9a4332f1fd6c91" Jan 28 15:27:32 crc kubenswrapper[4981]: I0128 15:27:32.630410 4981 generic.go:334] "Generic (PLEG): container finished" podID="4db01d71-54cf-49d3-a603-09ee1687a0d6" containerID="2438d4b45dd13ac5ba034bf14d886621820b68eb67d7fff8c1e37880651fb1f2" exitCode=0 Jan 28 15:27:32 crc kubenswrapper[4981]: I0128 15:27:32.630502 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" event={"ID":"4db01d71-54cf-49d3-a603-09ee1687a0d6","Type":"ContainerDied","Data":"2438d4b45dd13ac5ba034bf14d886621820b68eb67d7fff8c1e37880651fb1f2"} Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.115122 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.193997 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxtsz\" (UniqueName: \"kubernetes.io/projected/4db01d71-54cf-49d3-a603-09ee1687a0d6-kube-api-access-qxtsz\") pod \"4db01d71-54cf-49d3-a603-09ee1687a0d6\" (UID: \"4db01d71-54cf-49d3-a603-09ee1687a0d6\") " Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.194049 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4db01d71-54cf-49d3-a603-09ee1687a0d6-repo-setup-combined-ca-bundle\") pod \"4db01d71-54cf-49d3-a603-09ee1687a0d6\" (UID: \"4db01d71-54cf-49d3-a603-09ee1687a0d6\") " Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.194149 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4db01d71-54cf-49d3-a603-09ee1687a0d6-inventory\") pod \"4db01d71-54cf-49d3-a603-09ee1687a0d6\" (UID: \"4db01d71-54cf-49d3-a603-09ee1687a0d6\") " Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.194323 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4db01d71-54cf-49d3-a603-09ee1687a0d6-ssh-key-openstack-edpm-ipam\") pod \"4db01d71-54cf-49d3-a603-09ee1687a0d6\" (UID: \"4db01d71-54cf-49d3-a603-09ee1687a0d6\") " Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.199617 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4db01d71-54cf-49d3-a603-09ee1687a0d6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "4db01d71-54cf-49d3-a603-09ee1687a0d6" (UID: "4db01d71-54cf-49d3-a603-09ee1687a0d6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.200444 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4db01d71-54cf-49d3-a603-09ee1687a0d6-kube-api-access-qxtsz" (OuterVolumeSpecName: "kube-api-access-qxtsz") pod "4db01d71-54cf-49d3-a603-09ee1687a0d6" (UID: "4db01d71-54cf-49d3-a603-09ee1687a0d6"). InnerVolumeSpecName "kube-api-access-qxtsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.220166 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4db01d71-54cf-49d3-a603-09ee1687a0d6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4db01d71-54cf-49d3-a603-09ee1687a0d6" (UID: "4db01d71-54cf-49d3-a603-09ee1687a0d6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.239733 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4db01d71-54cf-49d3-a603-09ee1687a0d6-inventory" (OuterVolumeSpecName: "inventory") pod "4db01d71-54cf-49d3-a603-09ee1687a0d6" (UID: "4db01d71-54cf-49d3-a603-09ee1687a0d6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.296494 4981 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4db01d71-54cf-49d3-a603-09ee1687a0d6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.296547 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxtsz\" (UniqueName: \"kubernetes.io/projected/4db01d71-54cf-49d3-a603-09ee1687a0d6-kube-api-access-qxtsz\") on node \"crc\" DevicePath \"\"" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.296557 4981 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4db01d71-54cf-49d3-a603-09ee1687a0d6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.296566 4981 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4db01d71-54cf-49d3-a603-09ee1687a0d6-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.652577 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" event={"ID":"4db01d71-54cf-49d3-a603-09ee1687a0d6","Type":"ContainerDied","Data":"ff24bf381572629f59831c6c4acfbced743e39c98fba8bcec4fbd0b315da686f"} Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.652898 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff24bf381572629f59831c6c4acfbced743e39c98fba8bcec4fbd0b315da686f" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.653038 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd" Jan 28 15:27:34 crc kubenswrapper[4981]: E0128 15:27:34.747285 4981 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4db01d71_54cf_49d3_a603_09ee1687a0d6.slice/crio-ff24bf381572629f59831c6c4acfbced743e39c98fba8bcec4fbd0b315da686f\": RecentStats: unable to find data in memory cache]" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.750249 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c"] Jan 28 15:27:34 crc kubenswrapper[4981]: E0128 15:27:34.750725 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4db01d71-54cf-49d3-a603-09ee1687a0d6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.750754 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4db01d71-54cf-49d3-a603-09ee1687a0d6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.750983 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="4db01d71-54cf-49d3-a603-09ee1687a0d6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.751817 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.754178 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pz626" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.754431 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.754513 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.754620 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.799932 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c"] Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.806882 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f96c624-5794-4657-b6b9-00cccf2ac699-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pp86c\" (UID: \"7f96c624-5794-4657-b6b9-00cccf2ac699\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.806934 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gmhg\" (UniqueName: \"kubernetes.io/projected/7f96c624-5794-4657-b6b9-00cccf2ac699-kube-api-access-6gmhg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pp86c\" (UID: \"7f96c624-5794-4657-b6b9-00cccf2ac699\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.807023 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f96c624-5794-4657-b6b9-00cccf2ac699-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pp86c\" (UID: \"7f96c624-5794-4657-b6b9-00cccf2ac699\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.908502 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f96c624-5794-4657-b6b9-00cccf2ac699-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pp86c\" (UID: \"7f96c624-5794-4657-b6b9-00cccf2ac699\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.908552 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gmhg\" (UniqueName: \"kubernetes.io/projected/7f96c624-5794-4657-b6b9-00cccf2ac699-kube-api-access-6gmhg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pp86c\" (UID: \"7f96c624-5794-4657-b6b9-00cccf2ac699\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.908616 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f96c624-5794-4657-b6b9-00cccf2ac699-ssh-key-openstack-edpm-ipam\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-pp86c\" (UID: \"7f96c624-5794-4657-b6b9-00cccf2ac699\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.914104 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f96c624-5794-4657-b6b9-00cccf2ac699-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pp86c\" (UID: \"7f96c624-5794-4657-b6b9-00cccf2ac699\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.914760 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f96c624-5794-4657-b6b9-00cccf2ac699-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pp86c\" (UID: \"7f96c624-5794-4657-b6b9-00cccf2ac699\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c" Jan 28 15:27:34 crc kubenswrapper[4981]: I0128 15:27:34.924895 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gmhg\" (UniqueName: \"kubernetes.io/projected/7f96c624-5794-4657-b6b9-00cccf2ac699-kube-api-access-6gmhg\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-pp86c\" (UID: \"7f96c624-5794-4657-b6b9-00cccf2ac699\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c" Jan 28 15:27:35 crc kubenswrapper[4981]: I0128 15:27:35.114646 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c" Jan 28 15:27:35 crc kubenswrapper[4981]: I0128 15:27:35.763081 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c"] Jan 28 15:27:36 crc kubenswrapper[4981]: I0128 15:27:36.673942 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c" event={"ID":"7f96c624-5794-4657-b6b9-00cccf2ac699","Type":"ContainerStarted","Data":"8cbb4bdeceafae0549cce7acd219cf567c6a5d7b68a754e6707bcc9f9ff18aff"} Jan 28 15:27:36 crc kubenswrapper[4981]: I0128 15:27:36.674468 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c" event={"ID":"7f96c624-5794-4657-b6b9-00cccf2ac699","Type":"ContainerStarted","Data":"0e9bd8dc84d480e63b09ca46d69cdce0af9e04a5871cea606d369545c5f5d6cd"} Jan 28 15:27:36 crc kubenswrapper[4981]: I0128 15:27:36.701441 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c" podStartSLOduration=2.220157542 podStartE2EDuration="2.701415188s" podCreationTimestamp="2026-01-28 15:27:34 +0000 UTC" firstStartedPulling="2026-01-28 15:27:35.787662008 +0000 UTC m=+1467.239820249" lastFinishedPulling="2026-01-28 15:27:36.268919634 +0000 UTC m=+1467.721077895" observedRunningTime="2026-01-28 15:27:36.693167552 +0000 UTC m=+1468.145325783" watchObservedRunningTime="2026-01-28 15:27:36.701415188 +0000 UTC m=+1468.153573459" Jan 28 15:27:39 crc kubenswrapper[4981]: I0128 15:27:39.705559 4981 generic.go:334] "Generic (PLEG): container finished" podID="7f96c624-5794-4657-b6b9-00cccf2ac699" containerID="8cbb4bdeceafae0549cce7acd219cf567c6a5d7b68a754e6707bcc9f9ff18aff" exitCode=0 Jan 28 15:27:39 crc kubenswrapper[4981]: I0128 15:27:39.705663 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c" event={"ID":"7f96c624-5794-4657-b6b9-00cccf2ac699","Type":"ContainerDied","Data":"8cbb4bdeceafae0549cce7acd219cf567c6a5d7b68a754e6707bcc9f9ff18aff"} Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.168292 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.247422 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f96c624-5794-4657-b6b9-00cccf2ac699-ssh-key-openstack-edpm-ipam\") pod \"7f96c624-5794-4657-b6b9-00cccf2ac699\" (UID: \"7f96c624-5794-4657-b6b9-00cccf2ac699\") " Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.247499 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f96c624-5794-4657-b6b9-00cccf2ac699-inventory\") pod \"7f96c624-5794-4657-b6b9-00cccf2ac699\" (UID: \"7f96c624-5794-4657-b6b9-00cccf2ac699\") " Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.247682 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gmhg\" (UniqueName: \"kubernetes.io/projected/7f96c624-5794-4657-b6b9-00cccf2ac699-kube-api-access-6gmhg\") pod \"7f96c624-5794-4657-b6b9-00cccf2ac699\" (UID: \"7f96c624-5794-4657-b6b9-00cccf2ac699\") " Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.254005 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f96c624-5794-4657-b6b9-00cccf2ac699-kube-api-access-6gmhg" (OuterVolumeSpecName: "kube-api-access-6gmhg") pod "7f96c624-5794-4657-b6b9-00cccf2ac699" (UID: "7f96c624-5794-4657-b6b9-00cccf2ac699"). InnerVolumeSpecName "kube-api-access-6gmhg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.278900 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f96c624-5794-4657-b6b9-00cccf2ac699-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7f96c624-5794-4657-b6b9-00cccf2ac699" (UID: "7f96c624-5794-4657-b6b9-00cccf2ac699"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.280340 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f96c624-5794-4657-b6b9-00cccf2ac699-inventory" (OuterVolumeSpecName: "inventory") pod "7f96c624-5794-4657-b6b9-00cccf2ac699" (UID: "7f96c624-5794-4657-b6b9-00cccf2ac699"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.349374 4981 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7f96c624-5794-4657-b6b9-00cccf2ac699-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.349399 4981 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f96c624-5794-4657-b6b9-00cccf2ac699-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.349407 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gmhg\" (UniqueName: \"kubernetes.io/projected/7f96c624-5794-4657-b6b9-00cccf2ac699-kube-api-access-6gmhg\") on node \"crc\" DevicePath \"\"" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.734062 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c" event={"ID":"7f96c624-5794-4657-b6b9-00cccf2ac699","Type":"ContainerDied","Data":"0e9bd8dc84d480e63b09ca46d69cdce0af9e04a5871cea606d369545c5f5d6cd"} Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.734122 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e9bd8dc84d480e63b09ca46d69cdce0af9e04a5871cea606d369545c5f5d6cd" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.734210 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-pp86c" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.818809 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x"] Jan 28 15:27:41 crc kubenswrapper[4981]: E0128 15:27:41.819394 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f96c624-5794-4657-b6b9-00cccf2ac699" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.819439 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f96c624-5794-4657-b6b9-00cccf2ac699" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.819780 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f96c624-5794-4657-b6b9-00cccf2ac699" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.820653 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.824621 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pz626" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.824664 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.824670 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.824677 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.855782 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x"] Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.865849 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8005a06-6ceb-4918-867e-216081419a3a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x\" (UID: \"c8005a06-6ceb-4918-867e-216081419a3a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.865979 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8005a06-6ceb-4918-867e-216081419a3a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x\" (UID: \"c8005a06-6ceb-4918-867e-216081419a3a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.866050 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8005a06-6ceb-4918-867e-216081419a3a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x\" (UID: \"c8005a06-6ceb-4918-867e-216081419a3a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.866093 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7gtk\" (UniqueName: \"kubernetes.io/projected/c8005a06-6ceb-4918-867e-216081419a3a-kube-api-access-m7gtk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x\" (UID: \"c8005a06-6ceb-4918-867e-216081419a3a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.968385 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8005a06-6ceb-4918-867e-216081419a3a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x\" (UID: \"c8005a06-6ceb-4918-867e-216081419a3a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.968475 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/c8005a06-6ceb-4918-867e-216081419a3a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x\" (UID: \"c8005a06-6ceb-4918-867e-216081419a3a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.968529 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8005a06-6ceb-4918-867e-216081419a3a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x\" (UID: \"c8005a06-6ceb-4918-867e-216081419a3a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.968555 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7gtk\" (UniqueName: \"kubernetes.io/projected/c8005a06-6ceb-4918-867e-216081419a3a-kube-api-access-m7gtk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x\" (UID: \"c8005a06-6ceb-4918-867e-216081419a3a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.973636 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8005a06-6ceb-4918-867e-216081419a3a-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x\" (UID: \"c8005a06-6ceb-4918-867e-216081419a3a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.976114 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8005a06-6ceb-4918-867e-216081419a3a-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x\" (UID: \"c8005a06-6ceb-4918-867e-216081419a3a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.978717 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8005a06-6ceb-4918-867e-216081419a3a-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x\" (UID: \"c8005a06-6ceb-4918-867e-216081419a3a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" Jan 28 15:27:41 crc kubenswrapper[4981]: I0128 15:27:41.994907 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7gtk\" (UniqueName: \"kubernetes.io/projected/c8005a06-6ceb-4918-867e-216081419a3a-kube-api-access-m7gtk\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x\" (UID: \"c8005a06-6ceb-4918-867e-216081419a3a\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" Jan 28 15:27:42 crc kubenswrapper[4981]: I0128 15:27:42.148993 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" Jan 28 15:27:42 crc kubenswrapper[4981]: I0128 15:27:42.726335 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x"] Jan 28 15:27:42 crc kubenswrapper[4981]: W0128 15:27:42.733652 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8005a06_6ceb_4918_867e_216081419a3a.slice/crio-66ff90ff3f25805ca4e17e3d91e04a9c3b7f4e970cde9db0dbc695b217fa3a74 WatchSource:0}: Error finding container 66ff90ff3f25805ca4e17e3d91e04a9c3b7f4e970cde9db0dbc695b217fa3a74: Status 404 returned error can't find the container with id 66ff90ff3f25805ca4e17e3d91e04a9c3b7f4e970cde9db0dbc695b217fa3a74 Jan 28 15:27:42 crc kubenswrapper[4981]: I0128 15:27:42.747029 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" event={"ID":"c8005a06-6ceb-4918-867e-216081419a3a","Type":"ContainerStarted","Data":"66ff90ff3f25805ca4e17e3d91e04a9c3b7f4e970cde9db0dbc695b217fa3a74"} Jan 28 15:27:43 crc kubenswrapper[4981]: I0128 15:27:43.759758 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" event={"ID":"c8005a06-6ceb-4918-867e-216081419a3a","Type":"ContainerStarted","Data":"af6545c59c100cde10c5717380d907b141f3f01f95aa6ece9990f0a58e62e85c"} Jan 28 15:27:49 crc kubenswrapper[4981]: I0128 15:27:49.897945 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:27:49 crc kubenswrapper[4981]: I0128 15:27:49.898669 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:27:49 crc kubenswrapper[4981]: I0128 15:27:49.898760 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:27:49 crc kubenswrapper[4981]: I0128 15:27:49.899761 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"56ba4ad0b5731e840644f4808ebff65356aa66806974ca06b90bbbbf62b8740b"} pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:27:49 crc kubenswrapper[4981]: I0128 15:27:49.899860 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" containerID="cri-o://56ba4ad0b5731e840644f4808ebff65356aa66806974ca06b90bbbbf62b8740b" gracePeriod=600 Jan 28 15:27:50 crc kubenswrapper[4981]: I0128 15:27:50.854728 4981 generic.go:334] "Generic (PLEG): container finished" podID="67525d77-715e-4ec3-bdbb-6854657355c0" containerID="56ba4ad0b5731e840644f4808ebff65356aa66806974ca06b90bbbbf62b8740b" exitCode=0 Jan 28 15:27:50 crc 
kubenswrapper[4981]: I0128 15:27:50.854779 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerDied","Data":"56ba4ad0b5731e840644f4808ebff65356aa66806974ca06b90bbbbf62b8740b"} Jan 28 15:27:50 crc kubenswrapper[4981]: I0128 15:27:50.855671 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerStarted","Data":"b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5"} Jan 28 15:27:50 crc kubenswrapper[4981]: I0128 15:27:50.855708 4981 scope.go:117] "RemoveContainer" containerID="af8ca17674da28747e2478538c5afcfef139a1d418b13a8e190cf49cebcd62c0" Jan 28 15:27:50 crc kubenswrapper[4981]: I0128 15:27:50.883989 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" podStartSLOduration=9.189066372 podStartE2EDuration="9.883968032s" podCreationTimestamp="2026-01-28 15:27:41 +0000 UTC" firstStartedPulling="2026-01-28 15:27:42.735847232 +0000 UTC m=+1474.188005523" lastFinishedPulling="2026-01-28 15:27:43.430748942 +0000 UTC m=+1474.882907183" observedRunningTime="2026-01-28 15:27:43.7760471 +0000 UTC m=+1475.228205341" watchObservedRunningTime="2026-01-28 15:27:50.883968032 +0000 UTC m=+1482.336126283" Jan 28 15:28:30 crc kubenswrapper[4981]: I0128 15:28:30.986249 4981 scope.go:117] "RemoveContainer" containerID="3971960b8f145b8779b46839ce085fbe4914bd40df499fe01532190534f6efba" Jan 28 15:29:28 crc kubenswrapper[4981]: I0128 15:29:28.469437 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-44mls"] Jan 28 15:29:28 crc kubenswrapper[4981]: I0128 15:29:28.471678 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44mls" Jan 28 15:29:28 crc kubenswrapper[4981]: I0128 15:29:28.491067 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-44mls"] Jan 28 15:29:28 crc kubenswrapper[4981]: I0128 15:29:28.557910 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16322121-71a8-48ac-9799-2752dcc68efe-utilities\") pod \"redhat-marketplace-44mls\" (UID: \"16322121-71a8-48ac-9799-2752dcc68efe\") " pod="openshift-marketplace/redhat-marketplace-44mls" Jan 28 15:29:28 crc kubenswrapper[4981]: I0128 15:29:28.558110 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16322121-71a8-48ac-9799-2752dcc68efe-catalog-content\") pod \"redhat-marketplace-44mls\" (UID: \"16322121-71a8-48ac-9799-2752dcc68efe\") " pod="openshift-marketplace/redhat-marketplace-44mls" Jan 28 15:29:28 crc kubenswrapper[4981]: I0128 15:29:28.558165 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn8gk\" (UniqueName: \"kubernetes.io/projected/16322121-71a8-48ac-9799-2752dcc68efe-kube-api-access-wn8gk\") pod \"redhat-marketplace-44mls\" (UID: \"16322121-71a8-48ac-9799-2752dcc68efe\") " pod="openshift-marketplace/redhat-marketplace-44mls" Jan 28 15:29:28 crc kubenswrapper[4981]: I0128 15:29:28.659543 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16322121-71a8-48ac-9799-2752dcc68efe-utilities\") pod \"redhat-marketplace-44mls\" (UID: \"16322121-71a8-48ac-9799-2752dcc68efe\") " pod="openshift-marketplace/redhat-marketplace-44mls" Jan 28 15:29:28 crc kubenswrapper[4981]: I0128 15:29:28.659614 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16322121-71a8-48ac-9799-2752dcc68efe-catalog-content\") pod \"redhat-marketplace-44mls\" (UID: \"16322121-71a8-48ac-9799-2752dcc68efe\") " pod="openshift-marketplace/redhat-marketplace-44mls" Jan 28 15:29:28 crc kubenswrapper[4981]: I0128 15:29:28.659636 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn8gk\" (UniqueName: \"kubernetes.io/projected/16322121-71a8-48ac-9799-2752dcc68efe-kube-api-access-wn8gk\") pod \"redhat-marketplace-44mls\" (UID: \"16322121-71a8-48ac-9799-2752dcc68efe\") " pod="openshift-marketplace/redhat-marketplace-44mls" Jan 28 15:29:28 crc kubenswrapper[4981]: I0128 15:29:28.660178 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16322121-71a8-48ac-9799-2752dcc68efe-utilities\") pod \"redhat-marketplace-44mls\" (UID: \"16322121-71a8-48ac-9799-2752dcc68efe\") " pod="openshift-marketplace/redhat-marketplace-44mls" Jan 28 15:29:28 crc kubenswrapper[4981]: I0128 15:29:28.660234 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16322121-71a8-48ac-9799-2752dcc68efe-catalog-content\") pod \"redhat-marketplace-44mls\" (UID: \"16322121-71a8-48ac-9799-2752dcc68efe\") " pod="openshift-marketplace/redhat-marketplace-44mls" Jan 28 15:29:28 crc kubenswrapper[4981]: I0128 15:29:28.694073 4981 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-wn8gk\" (UniqueName: \"kubernetes.io/projected/16322121-71a8-48ac-9799-2752dcc68efe-kube-api-access-wn8gk\") pod \"redhat-marketplace-44mls\" (UID: \"16322121-71a8-48ac-9799-2752dcc68efe\") " pod="openshift-marketplace/redhat-marketplace-44mls" Jan 28 15:29:28 crc kubenswrapper[4981]: I0128 15:29:28.797392 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44mls" Jan 28 15:29:29 crc kubenswrapper[4981]: I0128 15:29:29.248796 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-44mls"] Jan 28 15:29:30 crc kubenswrapper[4981]: I0128 15:29:30.022535 4981 generic.go:334] "Generic (PLEG): container finished" podID="16322121-71a8-48ac-9799-2752dcc68efe" containerID="a4a076f57511672f32054742b5ecb9d34856a0aac3ffdb4cbf6f8423c994c393" exitCode=0 Jan 28 15:29:30 crc kubenswrapper[4981]: I0128 15:29:30.022604 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44mls" event={"ID":"16322121-71a8-48ac-9799-2752dcc68efe","Type":"ContainerDied","Data":"a4a076f57511672f32054742b5ecb9d34856a0aac3ffdb4cbf6f8423c994c393"} Jan 28 15:29:30 crc kubenswrapper[4981]: I0128 15:29:30.023713 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44mls" event={"ID":"16322121-71a8-48ac-9799-2752dcc68efe","Type":"ContainerStarted","Data":"83206c0eb243b266b68e224929aed7f3ca014700f50786bd493b97f03158eb04"} Jan 28 15:29:31 crc kubenswrapper[4981]: I0128 15:29:31.052934 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44mls" event={"ID":"16322121-71a8-48ac-9799-2752dcc68efe","Type":"ContainerStarted","Data":"991e80d7e4e80556bf5702b6d6536add2ef70dfafe7074eeaf49904f5da73d0f"} Jan 28 15:29:31 crc kubenswrapper[4981]: I0128 15:29:31.092592 4981 scope.go:117] "RemoveContainer" containerID="0ece829d3192440ac9d2b29babaa974ff959b9ec8978ff8acf890debd584c85e" Jan 28 15:29:31 crc kubenswrapper[4981]: I0128 15:29:31.124306 4981 scope.go:117] "RemoveContainer" containerID="e68bed12f94650b5a4f28f2cf0b0843d1b1db74a65bfe612ab0e70ecfc6c99c8" Jan 28 15:29:31 crc kubenswrapper[4981]: I0128 15:29:31.144822 4981 scope.go:117] "RemoveContainer" containerID="38f60aa5b6b381aecfa0426a25c09e04cf54a8863492ccfb1fe5dd9fc529cf0d" Jan 28 15:29:32 crc kubenswrapper[4981]: I0128 15:29:32.069497 4981 generic.go:334] "Generic (PLEG): container finished" podID="16322121-71a8-48ac-9799-2752dcc68efe" containerID="991e80d7e4e80556bf5702b6d6536add2ef70dfafe7074eeaf49904f5da73d0f" exitCode=0 Jan 28 15:29:32 crc kubenswrapper[4981]: I0128 15:29:32.069779 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44mls" event={"ID":"16322121-71a8-48ac-9799-2752dcc68efe","Type":"ContainerDied","Data":"991e80d7e4e80556bf5702b6d6536add2ef70dfafe7074eeaf49904f5da73d0f"} Jan 28 15:29:33 crc kubenswrapper[4981]: I0128 15:29:33.085508 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44mls" event={"ID":"16322121-71a8-48ac-9799-2752dcc68efe","Type":"ContainerStarted","Data":"50c7a27897c30105fdb7637299bd2765781a4983b2ce7c9e785078085afa7e63"} Jan 28 15:29:33 crc kubenswrapper[4981]: I0128 15:29:33.109131 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-44mls" podStartSLOduration=2.606565002 
podStartE2EDuration="5.109113411s" podCreationTimestamp="2026-01-28 15:29:28 +0000 UTC" firstStartedPulling="2026-01-28 15:29:30.026180751 +0000 UTC m=+1581.478339022" lastFinishedPulling="2026-01-28 15:29:32.52872919 +0000 UTC m=+1583.980887431" observedRunningTime="2026-01-28 15:29:33.104873741 +0000 UTC m=+1584.557031992" watchObservedRunningTime="2026-01-28 15:29:33.109113411 +0000 UTC m=+1584.561271672" Jan 28 15:29:38 crc kubenswrapper[4981]: I0128 15:29:38.798438 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-44mls" Jan 28 15:29:38 crc kubenswrapper[4981]: I0128 15:29:38.798796 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-44mls" Jan 28 15:29:38 crc kubenswrapper[4981]: I0128 15:29:38.859578 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-44mls" Jan 28 15:29:39 crc kubenswrapper[4981]: I0128 15:29:39.224355 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-44mls" Jan 28 15:29:39 crc kubenswrapper[4981]: I0128 15:29:39.279716 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-44mls"] Jan 28 15:29:41 crc kubenswrapper[4981]: I0128 15:29:41.175461 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-44mls" podUID="16322121-71a8-48ac-9799-2752dcc68efe" containerName="registry-server" containerID="cri-o://50c7a27897c30105fdb7637299bd2765781a4983b2ce7c9e785078085afa7e63" gracePeriod=2 Jan 28 15:29:42 crc kubenswrapper[4981]: I0128 15:29:42.194947 4981 generic.go:334] "Generic (PLEG): container finished" podID="16322121-71a8-48ac-9799-2752dcc68efe" containerID="50c7a27897c30105fdb7637299bd2765781a4983b2ce7c9e785078085afa7e63" exitCode=0 Jan 28 15:29:42 crc kubenswrapper[4981]: I0128 15:29:42.195030 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44mls" event={"ID":"16322121-71a8-48ac-9799-2752dcc68efe","Type":"ContainerDied","Data":"50c7a27897c30105fdb7637299bd2765781a4983b2ce7c9e785078085afa7e63"} Jan 28 15:29:42 crc kubenswrapper[4981]: I0128 15:29:42.195682 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-44mls" event={"ID":"16322121-71a8-48ac-9799-2752dcc68efe","Type":"ContainerDied","Data":"83206c0eb243b266b68e224929aed7f3ca014700f50786bd493b97f03158eb04"} Jan 28 15:29:42 crc kubenswrapper[4981]: I0128 15:29:42.195732 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83206c0eb243b266b68e224929aed7f3ca014700f50786bd493b97f03158eb04" Jan 28 15:29:42 crc kubenswrapper[4981]: I0128 15:29:42.232744 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44mls" Jan 28 15:29:42 crc kubenswrapper[4981]: I0128 15:29:42.312154 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wn8gk\" (UniqueName: \"kubernetes.io/projected/16322121-71a8-48ac-9799-2752dcc68efe-kube-api-access-wn8gk\") pod \"16322121-71a8-48ac-9799-2752dcc68efe\" (UID: \"16322121-71a8-48ac-9799-2752dcc68efe\") " Jan 28 15:29:42 crc kubenswrapper[4981]: I0128 15:29:42.312255 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16322121-71a8-48ac-9799-2752dcc68efe-utilities\") pod \"16322121-71a8-48ac-9799-2752dcc68efe\" (UID: \"16322121-71a8-48ac-9799-2752dcc68efe\") " Jan 28 15:29:42 crc kubenswrapper[4981]: I0128 15:29:42.312558 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16322121-71a8-48ac-9799-2752dcc68efe-catalog-content\") pod \"16322121-71a8-48ac-9799-2752dcc68efe\" (UID: \"16322121-71a8-48ac-9799-2752dcc68efe\") " Jan 28 15:29:42 crc kubenswrapper[4981]: I0128 15:29:42.313564 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16322121-71a8-48ac-9799-2752dcc68efe-utilities" (OuterVolumeSpecName: "utilities") pod "16322121-71a8-48ac-9799-2752dcc68efe" (UID: "16322121-71a8-48ac-9799-2752dcc68efe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:29:42 crc kubenswrapper[4981]: I0128 15:29:42.319211 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16322121-71a8-48ac-9799-2752dcc68efe-kube-api-access-wn8gk" (OuterVolumeSpecName: "kube-api-access-wn8gk") pod "16322121-71a8-48ac-9799-2752dcc68efe" (UID: "16322121-71a8-48ac-9799-2752dcc68efe"). InnerVolumeSpecName "kube-api-access-wn8gk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:29:42 crc kubenswrapper[4981]: I0128 15:29:42.333259 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16322121-71a8-48ac-9799-2752dcc68efe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16322121-71a8-48ac-9799-2752dcc68efe" (UID: "16322121-71a8-48ac-9799-2752dcc68efe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:29:42 crc kubenswrapper[4981]: I0128 15:29:42.414458 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16322121-71a8-48ac-9799-2752dcc68efe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:42 crc kubenswrapper[4981]: I0128 15:29:42.414497 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wn8gk\" (UniqueName: \"kubernetes.io/projected/16322121-71a8-48ac-9799-2752dcc68efe-kube-api-access-wn8gk\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:42 crc kubenswrapper[4981]: I0128 15:29:42.414515 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16322121-71a8-48ac-9799-2752dcc68efe-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:29:43 crc kubenswrapper[4981]: I0128 15:29:43.207564 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-44mls" Jan 28 15:29:43 crc kubenswrapper[4981]: I0128 15:29:43.268257 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-44mls"] Jan 28 15:29:43 crc kubenswrapper[4981]: I0128 15:29:43.285495 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-44mls"] Jan 28 15:29:43 crc kubenswrapper[4981]: I0128 15:29:43.332770 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16322121-71a8-48ac-9799-2752dcc68efe" path="/var/lib/kubelet/pods/16322121-71a8-48ac-9799-2752dcc68efe/volumes" Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.187943 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6"] Jan 28 15:30:00 crc kubenswrapper[4981]: E0128 15:30:00.188978 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16322121-71a8-48ac-9799-2752dcc68efe" containerName="registry-server" Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.188995 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="16322121-71a8-48ac-9799-2752dcc68efe" containerName="registry-server" Jan 28 15:30:00 crc kubenswrapper[4981]: E0128 15:30:00.189043 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16322121-71a8-48ac-9799-2752dcc68efe" containerName="extract-utilities" Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.189052 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="16322121-71a8-48ac-9799-2752dcc68efe" containerName="extract-utilities" Jan 28 15:30:00 crc kubenswrapper[4981]: E0128 15:30:00.189075 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16322121-71a8-48ac-9799-2752dcc68efe" containerName="extract-content" Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.189083 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="16322121-71a8-48ac-9799-2752dcc68efe" containerName="extract-content" Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.189399 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="16322121-71a8-48ac-9799-2752dcc68efe" containerName="registry-server" Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.190353 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6" Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.194030 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.197061 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.206514 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6"] Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.309141 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b-secret-volume\") pod \"collect-profiles-29493570-lm6z6\" (UID: \"b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6" Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.309730 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b-config-volume\") pod \"collect-profiles-29493570-lm6z6\" (UID: \"b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6" Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.309862 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bvld\" (UniqueName: \"kubernetes.io/projected/b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b-kube-api-access-7bvld\") pod \"collect-profiles-29493570-lm6z6\" (UID: \"b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6" Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.411824 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b-config-volume\") pod \"collect-profiles-29493570-lm6z6\" (UID: \"b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6" Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.411889 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bvld\" (UniqueName: \"kubernetes.io/projected/b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b-kube-api-access-7bvld\") pod \"collect-profiles-29493570-lm6z6\" (UID: \"b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6" Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.412007 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b-secret-volume\") pod \"collect-profiles-29493570-lm6z6\" (UID: \"b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6" Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.416289 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b-config-volume\") pod 
\"collect-profiles-29493570-lm6z6\" (UID: \"b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6" Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.430997 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b-secret-volume\") pod \"collect-profiles-29493570-lm6z6\" (UID: \"b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6" Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.436752 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bvld\" (UniqueName: \"kubernetes.io/projected/b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b-kube-api-access-7bvld\") pod \"collect-profiles-29493570-lm6z6\" (UID: \"b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6" Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.532533 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6" Jan 28 15:30:00 crc kubenswrapper[4981]: I0128 15:30:00.997165 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6"] Jan 28 15:30:01 crc kubenswrapper[4981]: I0128 15:30:01.439353 4981 generic.go:334] "Generic (PLEG): container finished" podID="b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b" containerID="33cf86988a93e27a839eca4398708a38c1afe30d8ab65c64dad17c7d1099bf8b" exitCode=0 Jan 28 15:30:01 crc kubenswrapper[4981]: I0128 15:30:01.439533 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6" event={"ID":"b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b","Type":"ContainerDied","Data":"33cf86988a93e27a839eca4398708a38c1afe30d8ab65c64dad17c7d1099bf8b"} Jan 28 15:30:01 crc kubenswrapper[4981]: I0128 15:30:01.439798 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6" event={"ID":"b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b","Type":"ContainerStarted","Data":"49e0be37fc053be83cea581bde2b10de3266f2bfc8c5d59da1b001218968d33f"} Jan 28 15:30:02 crc kubenswrapper[4981]: I0128 15:30:02.874243 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6" Jan 28 15:30:02 crc kubenswrapper[4981]: I0128 15:30:02.972373 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b-secret-volume\") pod \"b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b\" (UID: \"b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b\") " Jan 28 15:30:02 crc kubenswrapper[4981]: I0128 15:30:02.972477 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b-config-volume\") pod \"b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b\" (UID: \"b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b\") " Jan 28 15:30:02 crc kubenswrapper[4981]: I0128 15:30:02.972659 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bvld\" (UniqueName: \"kubernetes.io/projected/b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b-kube-api-access-7bvld\") pod \"b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b\" (UID: \"b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b\") " Jan 28 15:30:02 crc kubenswrapper[4981]: I0128 15:30:02.973495 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b-config-volume" (OuterVolumeSpecName: "config-volume") pod "b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b" (UID: "b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:30:02 crc kubenswrapper[4981]: I0128 15:30:02.977745 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b-kube-api-access-7bvld" (OuterVolumeSpecName: "kube-api-access-7bvld") pod "b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b" (UID: "b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b"). InnerVolumeSpecName "kube-api-access-7bvld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:30:02 crc kubenswrapper[4981]: I0128 15:30:02.977812 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b" (UID: "b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:30:03 crc kubenswrapper[4981]: I0128 15:30:03.075414 4981 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:03 crc kubenswrapper[4981]: I0128 15:30:03.075444 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bvld\" (UniqueName: \"kubernetes.io/projected/b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b-kube-api-access-7bvld\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:03 crc kubenswrapper[4981]: I0128 15:30:03.075455 4981 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:30:03 crc kubenswrapper[4981]: I0128 15:30:03.462113 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6" event={"ID":"b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b","Type":"ContainerDied","Data":"49e0be37fc053be83cea581bde2b10de3266f2bfc8c5d59da1b001218968d33f"} Jan 28 15:30:03 crc kubenswrapper[4981]: I0128 15:30:03.462152 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49e0be37fc053be83cea581bde2b10de3266f2bfc8c5d59da1b001218968d33f" Jan 28 15:30:03 crc kubenswrapper[4981]: I0128 15:30:03.462164 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6" Jan 28 15:30:19 crc kubenswrapper[4981]: I0128 15:30:19.898246 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:30:19 crc kubenswrapper[4981]: I0128 15:30:19.900060 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:30:31 crc kubenswrapper[4981]: I0128 15:30:31.188950 4981 scope.go:117] "RemoveContainer" containerID="b2c594831f6d498d299cde1375c09c2d1c77a97e84244c84afdae6e480592713" Jan 28 15:30:49 crc kubenswrapper[4981]: I0128 15:30:49.898138 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:30:49 crc kubenswrapper[4981]: I0128 15:30:49.898829 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:31:02 crc kubenswrapper[4981]: I0128 15:31:02.135712 4981 generic.go:334] "Generic (PLEG): container finished" podID="c8005a06-6ceb-4918-867e-216081419a3a" 
containerID="af6545c59c100cde10c5717380d907b141f3f01f95aa6ece9990f0a58e62e85c" exitCode=0 Jan 28 15:31:02 crc kubenswrapper[4981]: I0128 15:31:02.135795 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" event={"ID":"c8005a06-6ceb-4918-867e-216081419a3a","Type":"ContainerDied","Data":"af6545c59c100cde10c5717380d907b141f3f01f95aa6ece9990f0a58e62e85c"} Jan 28 15:31:03 crc kubenswrapper[4981]: I0128 15:31:03.733463 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" Jan 28 15:31:03 crc kubenswrapper[4981]: I0128 15:31:03.849056 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7gtk\" (UniqueName: \"kubernetes.io/projected/c8005a06-6ceb-4918-867e-216081419a3a-kube-api-access-m7gtk\") pod \"c8005a06-6ceb-4918-867e-216081419a3a\" (UID: \"c8005a06-6ceb-4918-867e-216081419a3a\") " Jan 28 15:31:03 crc kubenswrapper[4981]: I0128 15:31:03.849242 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8005a06-6ceb-4918-867e-216081419a3a-inventory\") pod \"c8005a06-6ceb-4918-867e-216081419a3a\" (UID: \"c8005a06-6ceb-4918-867e-216081419a3a\") " Jan 28 15:31:03 crc kubenswrapper[4981]: I0128 15:31:03.849274 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8005a06-6ceb-4918-867e-216081419a3a-bootstrap-combined-ca-bundle\") pod \"c8005a06-6ceb-4918-867e-216081419a3a\" (UID: \"c8005a06-6ceb-4918-867e-216081419a3a\") " Jan 28 15:31:03 crc kubenswrapper[4981]: I0128 15:31:03.849291 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8005a06-6ceb-4918-867e-216081419a3a-ssh-key-openstack-edpm-ipam\") pod \"c8005a06-6ceb-4918-867e-216081419a3a\" (UID: \"c8005a06-6ceb-4918-867e-216081419a3a\") " Jan 28 15:31:03 crc kubenswrapper[4981]: I0128 15:31:03.855111 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8005a06-6ceb-4918-867e-216081419a3a-kube-api-access-m7gtk" (OuterVolumeSpecName: "kube-api-access-m7gtk") pod "c8005a06-6ceb-4918-867e-216081419a3a" (UID: "c8005a06-6ceb-4918-867e-216081419a3a"). InnerVolumeSpecName "kube-api-access-m7gtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:31:03 crc kubenswrapper[4981]: I0128 15:31:03.855121 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8005a06-6ceb-4918-867e-216081419a3a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "c8005a06-6ceb-4918-867e-216081419a3a" (UID: "c8005a06-6ceb-4918-867e-216081419a3a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:03 crc kubenswrapper[4981]: I0128 15:31:03.875865 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8005a06-6ceb-4918-867e-216081419a3a-inventory" (OuterVolumeSpecName: "inventory") pod "c8005a06-6ceb-4918-867e-216081419a3a" (UID: "c8005a06-6ceb-4918-867e-216081419a3a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:03 crc kubenswrapper[4981]: I0128 15:31:03.881567 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8005a06-6ceb-4918-867e-216081419a3a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c8005a06-6ceb-4918-867e-216081419a3a" (UID: "c8005a06-6ceb-4918-867e-216081419a3a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:31:03 crc kubenswrapper[4981]: I0128 15:31:03.951472 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7gtk\" (UniqueName: \"kubernetes.io/projected/c8005a06-6ceb-4918-867e-216081419a3a-kube-api-access-m7gtk\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:03 crc kubenswrapper[4981]: I0128 15:31:03.951510 4981 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c8005a06-6ceb-4918-867e-216081419a3a-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:03 crc kubenswrapper[4981]: I0128 15:31:03.951525 4981 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8005a06-6ceb-4918-867e-216081419a3a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:03 crc kubenswrapper[4981]: I0128 15:31:03.951538 4981 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c8005a06-6ceb-4918-867e-216081419a3a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.162970 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" event={"ID":"c8005a06-6ceb-4918-867e-216081419a3a","Type":"ContainerDied","Data":"66ff90ff3f25805ca4e17e3d91e04a9c3b7f4e970cde9db0dbc695b217fa3a74"} Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.163024 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66ff90ff3f25805ca4e17e3d91e04a9c3b7f4e970cde9db0dbc695b217fa3a74" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.163070 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.277377 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7"] Jan 28 15:31:04 crc kubenswrapper[4981]: E0128 15:31:04.277747 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b" containerName="collect-profiles" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.277764 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b" containerName="collect-profiles" Jan 28 15:31:04 crc kubenswrapper[4981]: E0128 15:31:04.277784 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8005a06-6ceb-4918-867e-216081419a3a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.277791 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8005a06-6ceb-4918-867e-216081419a3a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.277984 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b" containerName="collect-profiles" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.278030 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8005a06-6ceb-4918-867e-216081419a3a" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.278633 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.281510 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.282033 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.282092 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pz626" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.282660 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.283671 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7"] Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.358799 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7\" (UID: \"f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.358874 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jcpb\" (UniqueName: \"kubernetes.io/projected/f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628-kube-api-access-7jcpb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7\" (UID: \"f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628\") " 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.358916 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7\" (UID: \"f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.460486 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7\" (UID: \"f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.460530 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jcpb\" (UniqueName: \"kubernetes.io/projected/f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628-kube-api-access-7jcpb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7\" (UID: \"f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.460565 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7\" (UID: \"f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.465835 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7\" (UID: \"f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.466532 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7\" (UID: \"f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.477983 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jcpb\" (UniqueName: \"kubernetes.io/projected/f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628-kube-api-access-7jcpb\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7\" (UID: \"f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7" Jan 28 15:31:04 crc kubenswrapper[4981]: I0128 15:31:04.601864 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7" Jan 28 15:31:05 crc kubenswrapper[4981]: I0128 15:31:05.166227 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7"] Jan 28 15:31:05 crc kubenswrapper[4981]: W0128 15:31:05.182501 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf65e4d3b_6e8a_42cf_8c86_b6f50a9d4628.slice/crio-acf1651072ad22dbcecf3e8b4598aba67f0d0c16a2162bb0e13493247e1a6d7a WatchSource:0}: Error finding container acf1651072ad22dbcecf3e8b4598aba67f0d0c16a2162bb0e13493247e1a6d7a: Status 404 returned error can't find the container with id acf1651072ad22dbcecf3e8b4598aba67f0d0c16a2162bb0e13493247e1a6d7a Jan 28 15:31:05 crc kubenswrapper[4981]: I0128 15:31:05.186500 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:31:06 crc kubenswrapper[4981]: I0128 15:31:06.217945 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7" event={"ID":"f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628","Type":"ContainerStarted","Data":"61ebc6cdbdb56e2807f5cb628118733025b5b841ca6d69e7aa7cc566d81f0d18"} Jan 28 15:31:06 crc kubenswrapper[4981]: I0128 15:31:06.218319 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7" event={"ID":"f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628","Type":"ContainerStarted","Data":"acf1651072ad22dbcecf3e8b4598aba67f0d0c16a2162bb0e13493247e1a6d7a"} Jan 28 15:31:06 crc kubenswrapper[4981]: I0128 15:31:06.248457 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7" podStartSLOduration=1.5240089430000001 podStartE2EDuration="2.248430734s" podCreationTimestamp="2026-01-28 15:31:04 +0000 UTC" firstStartedPulling="2026-01-28 15:31:05.186206304 +0000 UTC m=+1676.638364555" lastFinishedPulling="2026-01-28 15:31:05.910628105 +0000 UTC m=+1677.362786346" observedRunningTime="2026-01-28 15:31:06.235877217 +0000 UTC m=+1677.688035498" watchObservedRunningTime="2026-01-28 15:31:06.248430734 +0000 UTC m=+1677.700589015" Jan 28 15:31:07 crc kubenswrapper[4981]: I0128 15:31:07.050653 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-c8be-account-create-update-8gx8g"] Jan 28 15:31:07 crc kubenswrapper[4981]: I0128 15:31:07.058081 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-r2xkn"] Jan 28 15:31:07 crc kubenswrapper[4981]: I0128 15:31:07.065853 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-r2xkn"] Jan 28 15:31:07 crc kubenswrapper[4981]: I0128 15:31:07.073728 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-c8be-account-create-update-8gx8g"] Jan 28 15:31:07 crc kubenswrapper[4981]: I0128 15:31:07.333726 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0329c13d-bd93-45a8-82a3-b990aa22da35" path="/var/lib/kubelet/pods/0329c13d-bd93-45a8-82a3-b990aa22da35/volumes" Jan 28 15:31:07 crc kubenswrapper[4981]: I0128 15:31:07.335655 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8912af28-37cb-4f57-b318-9e3724b13213" path="/var/lib/kubelet/pods/8912af28-37cb-4f57-b318-9e3724b13213/volumes" Jan 28 15:31:08 crc 
kubenswrapper[4981]: I0128 15:31:08.037273 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-vxd4q"] Jan 28 15:31:08 crc kubenswrapper[4981]: I0128 15:31:08.050988 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-893e-account-create-update-tjqh9"] Jan 28 15:31:08 crc kubenswrapper[4981]: I0128 15:31:08.059917 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-vxd4q"] Jan 28 15:31:08 crc kubenswrapper[4981]: I0128 15:31:08.067059 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-893e-account-create-update-tjqh9"] Jan 28 15:31:09 crc kubenswrapper[4981]: I0128 15:31:09.340662 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6fd85fb-5a63-4f04-8c99-c03167e5e4a9" path="/var/lib/kubelet/pods/c6fd85fb-5a63-4f04-8c99-c03167e5e4a9/volumes" Jan 28 15:31:09 crc kubenswrapper[4981]: I0128 15:31:09.341945 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cea78484-cb73-4b6c-bf1f-36e44fcb7cf4" path="/var/lib/kubelet/pods/cea78484-cb73-4b6c-bf1f-36e44fcb7cf4/volumes" Jan 28 15:31:12 crc kubenswrapper[4981]: I0128 15:31:12.037571 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9e9b-account-create-update-78cpp"] Jan 28 15:31:12 crc kubenswrapper[4981]: I0128 15:31:12.048373 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9e9b-account-create-update-78cpp"] Jan 28 15:31:13 crc kubenswrapper[4981]: I0128 15:31:13.046863 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-dzkv7"] Jan 28 15:31:13 crc kubenswrapper[4981]: I0128 15:31:13.062291 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-dzkv7"] Jan 28 15:31:13 crc kubenswrapper[4981]: I0128 15:31:13.334256 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88e9ea9c-6f3b-425a-bcb0-39d66e4040ee" path="/var/lib/kubelet/pods/88e9ea9c-6f3b-425a-bcb0-39d66e4040ee/volumes" Jan 28 15:31:13 crc kubenswrapper[4981]: I0128 15:31:13.335798 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44" path="/var/lib/kubelet/pods/b3d8dc6a-4389-4a69-bc45-2b3ab5ed1f44/volumes" Jan 28 15:31:15 crc kubenswrapper[4981]: I0128 15:31:15.030695 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-58dz7"] Jan 28 15:31:15 crc kubenswrapper[4981]: I0128 15:31:15.040615 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-58dz7"] Jan 28 15:31:15 crc kubenswrapper[4981]: I0128 15:31:15.332422 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dad862f8-3d07-4a51-81a2-c6d7226f8ba1" path="/var/lib/kubelet/pods/dad862f8-3d07-4a51-81a2-c6d7226f8ba1/volumes" Jan 28 15:31:19 crc kubenswrapper[4981]: I0128 15:31:19.897338 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:31:19 crc kubenswrapper[4981]: I0128 15:31:19.897726 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:31:19 crc kubenswrapper[4981]: I0128 15:31:19.897791 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:31:19 crc kubenswrapper[4981]: I0128 15:31:19.898774 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5"} pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:31:19 crc kubenswrapper[4981]: I0128 15:31:19.898865 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" containerID="cri-o://b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" gracePeriod=600 Jan 28 15:31:20 crc kubenswrapper[4981]: E0128 15:31:20.028135 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:31:20 crc kubenswrapper[4981]: I0128 15:31:20.380666 4981 generic.go:334] "Generic (PLEG): container finished" podID="67525d77-715e-4ec3-bdbb-6854657355c0" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" exitCode=0 Jan 28 15:31:20 crc kubenswrapper[4981]: I0128 15:31:20.380716 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerDied","Data":"b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5"} Jan 28 15:31:20 crc kubenswrapper[4981]: I0128 15:31:20.381089 4981 scope.go:117] "RemoveContainer" containerID="56ba4ad0b5731e840644f4808ebff65356aa66806974ca06b90bbbbf62b8740b" Jan 28 15:31:20 crc kubenswrapper[4981]: I0128 15:31:20.381697 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:31:20 crc kubenswrapper[4981]: E0128 15:31:20.382027 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:31:31 crc kubenswrapper[4981]: I0128 15:31:31.256478 4981 scope.go:117] "RemoveContainer" containerID="9d0d7f2c595ed34b6b746b5b2a439c9cebe1ed66d245adb0eb82d732491664d4" Jan 28 15:31:31 crc kubenswrapper[4981]: I0128 15:31:31.284884 4981 scope.go:117] "RemoveContainer" containerID="5f6228c1dede5551735def8d5ecedadaab7001c13664fe6dc26cbf12d0dc2b14" Jan 28 15:31:31 crc kubenswrapper[4981]: I0128 15:31:31.369581 4981 scope.go:117] "RemoveContainer" 
containerID="f6f6c593ee94516385a1845445ae913d0e788745620c2357b1ef2fd9813f422e" Jan 28 15:31:31 crc kubenswrapper[4981]: I0128 15:31:31.397309 4981 scope.go:117] "RemoveContainer" containerID="a52085f7bca4e361d587761cb7c7e82631dac1c8cf69167d22887f4f91c3b846" Jan 28 15:31:31 crc kubenswrapper[4981]: I0128 15:31:31.423319 4981 scope.go:117] "RemoveContainer" containerID="ace9fb9ce756a588c3c6affd7a1ce90201333b645faaccb74cbb6a00176e19a6" Jan 28 15:31:31 crc kubenswrapper[4981]: I0128 15:31:31.465134 4981 scope.go:117] "RemoveContainer" containerID="ef25cfad4afdd200887a4c460764516f9bfc87dbcc228f662344596b5b6db633" Jan 28 15:31:31 crc kubenswrapper[4981]: I0128 15:31:31.512105 4981 scope.go:117] "RemoveContainer" containerID="b485a9f4c2b1128c44db139fb8d9f39300c751028eb9da0b518c5ae67762d4f5" Jan 28 15:31:31 crc kubenswrapper[4981]: I0128 15:31:31.554698 4981 scope.go:117] "RemoveContainer" containerID="60e1f8071aafadaa0edf6f9e60bad757dcd9ac844596ea221be2f289a300ff94" Jan 28 15:31:31 crc kubenswrapper[4981]: I0128 15:31:31.583292 4981 scope.go:117] "RemoveContainer" containerID="675f7f4a4d21f1f0b9da7a0d4057ea65b90881e3c1c41cf359ddfee8480ae1c1" Jan 28 15:31:31 crc kubenswrapper[4981]: I0128 15:31:31.638414 4981 scope.go:117] "RemoveContainer" containerID="b5fd1c0a06956b34d63d1144a2bdfebee439cc02aba1c6c1a997689785770a8f" Jan 28 15:31:32 crc kubenswrapper[4981]: I0128 15:31:32.318922 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:31:32 crc kubenswrapper[4981]: E0128 15:31:32.319648 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:31:43 crc kubenswrapper[4981]: I0128 15:31:43.064054 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-57b4-account-create-update-vmrr8"] Jan 28 15:31:43 crc kubenswrapper[4981]: I0128 15:31:43.074289 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-h227q"] Jan 28 15:31:43 crc kubenswrapper[4981]: I0128 15:31:43.085729 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-57b4-account-create-update-vmrr8"] Jan 28 15:31:43 crc kubenswrapper[4981]: I0128 15:31:43.096682 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-h227q"] Jan 28 15:31:43 crc kubenswrapper[4981]: I0128 15:31:43.333929 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fa484df-952c-4be2-9edc-b8118029bf2e" path="/var/lib/kubelet/pods/7fa484df-952c-4be2-9edc-b8118029bf2e/volumes" Jan 28 15:31:43 crc kubenswrapper[4981]: I0128 15:31:43.335452 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86753537-ca73-42b1-900c-89a238d6bd4e" path="/var/lib/kubelet/pods/86753537-ca73-42b1-900c-89a238d6bd4e/volumes" Jan 28 15:31:44 crc kubenswrapper[4981]: I0128 15:31:44.046776 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-84pn6"] Jan 28 15:31:44 crc kubenswrapper[4981]: I0128 15:31:44.057779 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-f34f-account-create-update-vdhpg"] Jan 28 15:31:44 crc kubenswrapper[4981]: 
I0128 15:31:44.066310 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-c073-account-create-update-zgk9m"] Jan 28 15:31:44 crc kubenswrapper[4981]: I0128 15:31:44.073750 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-9ngsk"] Jan 28 15:31:44 crc kubenswrapper[4981]: I0128 15:31:44.081688 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-84pn6"] Jan 28 15:31:44 crc kubenswrapper[4981]: I0128 15:31:44.098687 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-f34f-account-create-update-vdhpg"] Jan 28 15:31:44 crc kubenswrapper[4981]: I0128 15:31:44.107880 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-9ngsk"] Jan 28 15:31:44 crc kubenswrapper[4981]: I0128 15:31:44.115172 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-c073-account-create-update-zgk9m"] Jan 28 15:31:45 crc kubenswrapper[4981]: I0128 15:31:45.338778 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="876c9310-5038-4cad-a381-b7d06ecd9fef" path="/var/lib/kubelet/pods/876c9310-5038-4cad-a381-b7d06ecd9fef/volumes" Jan 28 15:31:45 crc kubenswrapper[4981]: I0128 15:31:45.340483 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cff680cf-ec0c-41e0-82bf-4d59bbac0238" path="/var/lib/kubelet/pods/cff680cf-ec0c-41e0-82bf-4d59bbac0238/volumes" Jan 28 15:31:45 crc kubenswrapper[4981]: I0128 15:31:45.341697 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d90db1e4-5229-4543-9037-76dd0f5063eb" path="/var/lib/kubelet/pods/d90db1e4-5229-4543-9037-76dd0f5063eb/volumes" Jan 28 15:31:45 crc kubenswrapper[4981]: I0128 15:31:45.343133 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3" path="/var/lib/kubelet/pods/dc7e359d-b4a8-44e6-9b54-9d1128e6f6b3/volumes" Jan 28 15:31:46 crc kubenswrapper[4981]: I0128 15:31:46.318966 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:31:46 crc kubenswrapper[4981]: E0128 15:31:46.319247 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:31:49 crc kubenswrapper[4981]: I0128 15:31:49.070384 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-r876j"] Jan 28 15:31:49 crc kubenswrapper[4981]: I0128 15:31:49.085964 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-6qpmd"] Jan 28 15:31:49 crc kubenswrapper[4981]: I0128 15:31:49.094913 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-r876j"] Jan 28 15:31:49 crc kubenswrapper[4981]: I0128 15:31:49.136661 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-6qpmd"] Jan 28 15:31:49 crc kubenswrapper[4981]: I0128 15:31:49.335938 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cbc9552-255a-40df-ab3c-aed79b3c0b5c" path="/var/lib/kubelet/pods/6cbc9552-255a-40df-ab3c-aed79b3c0b5c/volumes" Jan 28 15:31:49 crc kubenswrapper[4981]: 
I0128 15:31:49.337183 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5f8f119-df95-4eef-979b-ae7d2cd54f00" path="/var/lib/kubelet/pods/f5f8f119-df95-4eef-979b-ae7d2cd54f00/volumes"
Jan 28 15:31:59 crc kubenswrapper[4981]: I0128 15:31:59.330524 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5"
Jan 28 15:31:59 crc kubenswrapper[4981]: E0128 15:31:59.331906 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0"
Jan 28 15:32:12 crc kubenswrapper[4981]: I0128 15:32:12.318963 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5"
Jan 28 15:32:12 crc kubenswrapper[4981]: E0128 15:32:12.319754 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0"
Jan 28 15:32:26 crc kubenswrapper[4981]: I0128 15:32:26.318991 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5"
Jan 28 15:32:26 crc kubenswrapper[4981]: E0128 15:32:26.319938 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0"
Jan 28 15:32:31 crc kubenswrapper[4981]: I0128 15:32:31.838002 4981 scope.go:117] "RemoveContainer" containerID="8b98d6be7d196c2d777400d70fa95ca66495c80121ea2487450f2cac935d8e2c"
Jan 28 15:32:31 crc kubenswrapper[4981]: I0128 15:32:31.877320 4981 scope.go:117] "RemoveContainer" containerID="c3b1e4c28c6434de5d292a70e72d25fd1667c6a5b92b0ad332eaa2239cefd236"
Jan 28 15:32:31 crc kubenswrapper[4981]: I0128 15:32:31.936747 4981 scope.go:117] "RemoveContainer" containerID="b30e7391f856211a243b6e4e3a702379b89b851ee1790f8c587638b4c77ebc1e"
Jan 28 15:32:32 crc kubenswrapper[4981]: I0128 15:32:32.017664 4981 scope.go:117] "RemoveContainer" containerID="417fe418f54d8719bba5c35178614922cc1d37cd590b1d7810f18f3052216b2e"
Jan 28 15:32:32 crc kubenswrapper[4981]: I0128 15:32:32.042367 4981 scope.go:117] "RemoveContainer" containerID="ea739a09197cb87415b3ee3fcc9fb6db4e7cb3d9b08f58d8e348919fe05ae972"
Jan 28 15:32:32 crc kubenswrapper[4981]: I0128 15:32:32.087424 4981 scope.go:117] "RemoveContainer" containerID="8817e309f229f5f8e99f9814c9af19a5b4b5b7b7531a53d19ef9d263b00ac3fd"
Jan 28 15:32:32 crc kubenswrapper[4981]: I0128 15:32:32.124414 4981 scope.go:117] "RemoveContainer" containerID="d9060a931b6c8b9edc866fa50248457b3160fe5ef58caf13abe68b93862aee10"
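
Note the two different "RemoveContainer" patterns here. The single-ID retries for b4f431dc... come from the crash-loop handling of machine-config-daemon. The bursts over many distinct IDs, at 15:31:31 and again at 15:32:31-32 (and at 15:30:31 further up), are spaced 15:32:31 - 15:31:31 = 60s apart, consistent with the kubelet's once-a-minute garbage collection of exited containers; the IDs being reaped belong to the completed one-shot pods whose API objects are deleted in the surrounding SyncLoop DELETE/REMOVE records.

Jan 28 15:32:32 crc kubenswrapper[4981]: I0128 15:32:32.159041 4981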
scope.go:117] "RemoveContainer" containerID="3b2f5bf4e7f9afb32a385fcfd420dea99f5fbf615dd31d49bedae1048471795f" Jan 28 15:32:36 crc kubenswrapper[4981]: I0128 15:32:36.057523 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5mpwx"] Jan 28 15:32:36 crc kubenswrapper[4981]: I0128 15:32:36.077340 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5mpwx"] Jan 28 15:32:37 crc kubenswrapper[4981]: I0128 15:32:37.050892 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-jd49r"] Jan 28 15:32:37 crc kubenswrapper[4981]: I0128 15:32:37.064827 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-jd49r"] Jan 28 15:32:37 crc kubenswrapper[4981]: I0128 15:32:37.340312 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82101008-6112-4a68-8776-7a2c896b5eab" path="/var/lib/kubelet/pods/82101008-6112-4a68-8776-7a2c896b5eab/volumes" Jan 28 15:32:37 crc kubenswrapper[4981]: I0128 15:32:37.341557 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca96a8b3-3c53-43ed-bd39-f0bb55a04250" path="/var/lib/kubelet/pods/ca96a8b3-3c53-43ed-bd39-f0bb55a04250/volumes" Jan 28 15:32:40 crc kubenswrapper[4981]: I0128 15:32:40.319260 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:32:40 crc kubenswrapper[4981]: E0128 15:32:40.320505 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:32:45 crc kubenswrapper[4981]: I0128 15:32:45.406513 4981 generic.go:334] "Generic (PLEG): container finished" podID="f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628" containerID="61ebc6cdbdb56e2807f5cb628118733025b5b841ca6d69e7aa7cc566d81f0d18" exitCode=0 Jan 28 15:32:45 crc kubenswrapper[4981]: I0128 15:32:45.406598 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7" event={"ID":"f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628","Type":"ContainerDied","Data":"61ebc6cdbdb56e2807f5cb628118733025b5b841ca6d69e7aa7cc566d81f0d18"} Jan 28 15:32:46 crc kubenswrapper[4981]: I0128 15:32:46.903495 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.032985 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628-ssh-key-openstack-edpm-ipam\") pod \"f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628\" (UID: \"f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628\") " Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.033059 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jcpb\" (UniqueName: \"kubernetes.io/projected/f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628-kube-api-access-7jcpb\") pod \"f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628\" (UID: \"f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628\") " Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.033149 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628-inventory\") pod \"f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628\" (UID: \"f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628\") " Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.038161 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628-kube-api-access-7jcpb" (OuterVolumeSpecName: "kube-api-access-7jcpb") pod "f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628" (UID: "f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628"). InnerVolumeSpecName "kube-api-access-7jcpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.058487 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628" (UID: "f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.062000 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628-inventory" (OuterVolumeSpecName: "inventory") pod "f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628" (UID: "f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.135550 4981 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.135591 4981 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.135608 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jcpb\" (UniqueName: \"kubernetes.io/projected/f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628-kube-api-access-7jcpb\") on node \"crc\" DevicePath \"\"" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.426868 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7" event={"ID":"f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628","Type":"ContainerDied","Data":"acf1651072ad22dbcecf3e8b4598aba67f0d0c16a2162bb0e13493247e1a6d7a"} Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.426906 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acf1651072ad22dbcecf3e8b4598aba67f0d0c16a2162bb0e13493247e1a6d7a" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.426910 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.550536 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p"] Jan 28 15:32:47 crc kubenswrapper[4981]: E0128 15:32:47.550928 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.550947 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.551169 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.551839 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.554460 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pz626" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.555064 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.555690 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.556042 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.564332 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p"] Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.748157 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsmhz\" (UniqueName: \"kubernetes.io/projected/f50a5359-8f8b-47bc-a345-c91ace0f611f-kube-api-access-qsmhz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p\" (UID: \"f50a5359-8f8b-47bc-a345-c91ace0f611f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.748277 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f50a5359-8f8b-47bc-a345-c91ace0f611f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p\" (UID: \"f50a5359-8f8b-47bc-a345-c91ace0f611f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.748350 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f50a5359-8f8b-47bc-a345-c91ace0f611f-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p\" (UID: \"f50a5359-8f8b-47bc-a345-c91ace0f611f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.850297 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsmhz\" (UniqueName: \"kubernetes.io/projected/f50a5359-8f8b-47bc-a345-c91ace0f611f-kube-api-access-qsmhz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p\" (UID: \"f50a5359-8f8b-47bc-a345-c91ace0f611f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.850409 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f50a5359-8f8b-47bc-a345-c91ace0f611f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p\" (UID: \"f50a5359-8f8b-47bc-a345-c91ace0f611f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.850503 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/f50a5359-8f8b-47bc-a345-c91ace0f611f-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p\" (UID: \"f50a5359-8f8b-47bc-a345-c91ace0f611f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.857563 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f50a5359-8f8b-47bc-a345-c91ace0f611f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p\" (UID: \"f50a5359-8f8b-47bc-a345-c91ace0f611f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.866477 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f50a5359-8f8b-47bc-a345-c91ace0f611f-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p\" (UID: \"f50a5359-8f8b-47bc-a345-c91ace0f611f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p" Jan 28 15:32:47 crc kubenswrapper[4981]: I0128 15:32:47.883667 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsmhz\" (UniqueName: \"kubernetes.io/projected/f50a5359-8f8b-47bc-a345-c91ace0f611f-kube-api-access-qsmhz\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p\" (UID: \"f50a5359-8f8b-47bc-a345-c91ace0f611f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p" Jan 28 15:32:48 crc kubenswrapper[4981]: I0128 15:32:48.182556 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p" Jan 28 15:32:48 crc kubenswrapper[4981]: I0128 15:32:48.624644 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p"] Jan 28 15:32:49 crc kubenswrapper[4981]: I0128 15:32:49.455253 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p" event={"ID":"f50a5359-8f8b-47bc-a345-c91ace0f611f","Type":"ContainerStarted","Data":"9e215e889a44a68fb83d934da45b11de2a0fa18bbbf758985726f1b7988ca11f"} Jan 28 15:32:49 crc kubenswrapper[4981]: I0128 15:32:49.455697 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p" event={"ID":"f50a5359-8f8b-47bc-a345-c91ace0f611f","Type":"ContainerStarted","Data":"34a983cae73045127b34839a387a65decd77450285c1aa4abafaea67e3ddfec1"} Jan 28 15:32:49 crc kubenswrapper[4981]: I0128 15:32:49.475726 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p" podStartSLOduration=2.028546283 podStartE2EDuration="2.475703425s" podCreationTimestamp="2026-01-28 15:32:47 +0000 UTC" firstStartedPulling="2026-01-28 15:32:48.633815636 +0000 UTC m=+1780.085973877" lastFinishedPulling="2026-01-28 15:32:49.080972778 +0000 UTC m=+1780.533131019" observedRunningTime="2026-01-28 15:32:49.470850148 +0000 UTC m=+1780.923008429" watchObservedRunningTime="2026-01-28 15:32:49.475703425 +0000 UTC m=+1780.927861676" Jan 28 15:32:54 crc kubenswrapper[4981]: I0128 15:32:54.318371 4981 scope.go:117] "RemoveContainer" 
containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:32:54 crc kubenswrapper[4981]: E0128 15:32:54.319066 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:33:09 crc kubenswrapper[4981]: I0128 15:33:09.329617 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:33:09 crc kubenswrapper[4981]: E0128 15:33:09.335537 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:33:14 crc kubenswrapper[4981]: I0128 15:33:14.161984 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8vg58"] Jan 28 15:33:14 crc kubenswrapper[4981]: I0128 15:33:14.176213 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vg58" Jan 28 15:33:14 crc kubenswrapper[4981]: I0128 15:33:14.198965 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vg58"] Jan 28 15:33:14 crc kubenswrapper[4981]: I0128 15:33:14.303405 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb43acd-6c3a-4ec3-bed3-69b07172a785-utilities\") pod \"certified-operators-8vg58\" (UID: \"7eb43acd-6c3a-4ec3-bed3-69b07172a785\") " pod="openshift-marketplace/certified-operators-8vg58" Jan 28 15:33:14 crc kubenswrapper[4981]: I0128 15:33:14.303559 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb43acd-6c3a-4ec3-bed3-69b07172a785-catalog-content\") pod \"certified-operators-8vg58\" (UID: \"7eb43acd-6c3a-4ec3-bed3-69b07172a785\") " pod="openshift-marketplace/certified-operators-8vg58" Jan 28 15:33:14 crc kubenswrapper[4981]: I0128 15:33:14.303944 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbhcg\" (UniqueName: \"kubernetes.io/projected/7eb43acd-6c3a-4ec3-bed3-69b07172a785-kube-api-access-hbhcg\") pod \"certified-operators-8vg58\" (UID: \"7eb43acd-6c3a-4ec3-bed3-69b07172a785\") " pod="openshift-marketplace/certified-operators-8vg58" Jan 28 15:33:14 crc kubenswrapper[4981]: I0128 15:33:14.406132 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbhcg\" (UniqueName: \"kubernetes.io/projected/7eb43acd-6c3a-4ec3-bed3-69b07172a785-kube-api-access-hbhcg\") pod \"certified-operators-8vg58\" (UID: \"7eb43acd-6c3a-4ec3-bed3-69b07172a785\") " pod="openshift-marketplace/certified-operators-8vg58" Jan 28 15:33:14 crc kubenswrapper[4981]: I0128 15:33:14.406240 4981 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb43acd-6c3a-4ec3-bed3-69b07172a785-utilities\") pod \"certified-operators-8vg58\" (UID: \"7eb43acd-6c3a-4ec3-bed3-69b07172a785\") " pod="openshift-marketplace/certified-operators-8vg58" Jan 28 15:33:14 crc kubenswrapper[4981]: I0128 15:33:14.406272 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb43acd-6c3a-4ec3-bed3-69b07172a785-catalog-content\") pod \"certified-operators-8vg58\" (UID: \"7eb43acd-6c3a-4ec3-bed3-69b07172a785\") " pod="openshift-marketplace/certified-operators-8vg58" Jan 28 15:33:14 crc kubenswrapper[4981]: I0128 15:33:14.406745 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb43acd-6c3a-4ec3-bed3-69b07172a785-catalog-content\") pod \"certified-operators-8vg58\" (UID: \"7eb43acd-6c3a-4ec3-bed3-69b07172a785\") " pod="openshift-marketplace/certified-operators-8vg58" Jan 28 15:33:14 crc kubenswrapper[4981]: I0128 15:33:14.406896 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb43acd-6c3a-4ec3-bed3-69b07172a785-utilities\") pod \"certified-operators-8vg58\" (UID: \"7eb43acd-6c3a-4ec3-bed3-69b07172a785\") " pod="openshift-marketplace/certified-operators-8vg58" Jan 28 15:33:14 crc kubenswrapper[4981]: I0128 15:33:14.427971 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbhcg\" (UniqueName: \"kubernetes.io/projected/7eb43acd-6c3a-4ec3-bed3-69b07172a785-kube-api-access-hbhcg\") pod \"certified-operators-8vg58\" (UID: \"7eb43acd-6c3a-4ec3-bed3-69b07172a785\") " pod="openshift-marketplace/certified-operators-8vg58" Jan 28 15:33:14 crc kubenswrapper[4981]: I0128 15:33:14.525410 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8vg58" Jan 28 15:33:15 crc kubenswrapper[4981]: I0128 15:33:15.013873 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vg58"] Jan 28 15:33:15 crc kubenswrapper[4981]: I0128 15:33:15.738217 4981 generic.go:334] "Generic (PLEG): container finished" podID="7eb43acd-6c3a-4ec3-bed3-69b07172a785" containerID="48b40601cd66a39b370e9d5e2fa96d59fc05679d19d47ccd66c1327dde87eade" exitCode=0 Jan 28 15:33:15 crc kubenswrapper[4981]: I0128 15:33:15.738257 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vg58" event={"ID":"7eb43acd-6c3a-4ec3-bed3-69b07172a785","Type":"ContainerDied","Data":"48b40601cd66a39b370e9d5e2fa96d59fc05679d19d47ccd66c1327dde87eade"} Jan 28 15:33:15 crc kubenswrapper[4981]: I0128 15:33:15.738683 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vg58" event={"ID":"7eb43acd-6c3a-4ec3-bed3-69b07172a785","Type":"ContainerStarted","Data":"eeff73dbd732d7d413c0ef438fde46edd7903eda4ffd1375cc85a0ff421a8470"} Jan 28 15:33:17 crc kubenswrapper[4981]: I0128 15:33:17.760589 4981 generic.go:334] "Generic (PLEG): container finished" podID="7eb43acd-6c3a-4ec3-bed3-69b07172a785" containerID="6293fefec5c7f0ae47f783fd085d6a23613a915041feebef8d7373cbc53adbd3" exitCode=0 Jan 28 15:33:17 crc kubenswrapper[4981]: I0128 15:33:17.760802 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vg58" event={"ID":"7eb43acd-6c3a-4ec3-bed3-69b07172a785","Type":"ContainerDied","Data":"6293fefec5c7f0ae47f783fd085d6a23613a915041feebef8d7373cbc53adbd3"} Jan 28 15:33:18 crc kubenswrapper[4981]: I0128 15:33:18.772453 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vg58" event={"ID":"7eb43acd-6c3a-4ec3-bed3-69b07172a785","Type":"ContainerStarted","Data":"f4d654b3a22bbb0a51ec3a15d35ae4c89d4a785e35de6ff984fce2fa103280f6"} Jan 28 15:33:18 crc kubenswrapper[4981]: I0128 15:33:18.805083 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8vg58" podStartSLOduration=2.220205849 podStartE2EDuration="4.805056526s" podCreationTimestamp="2026-01-28 15:33:14 +0000 UTC" firstStartedPulling="2026-01-28 15:33:15.743600706 +0000 UTC m=+1807.195758967" lastFinishedPulling="2026-01-28 15:33:18.328451403 +0000 UTC m=+1809.780609644" observedRunningTime="2026-01-28 15:33:18.788646306 +0000 UTC m=+1810.240804557" watchObservedRunningTime="2026-01-28 15:33:18.805056526 +0000 UTC m=+1810.257214777" Jan 28 15:33:22 crc kubenswrapper[4981]: I0128 15:33:22.059098 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-t6ht4"] Jan 28 15:33:22 crc kubenswrapper[4981]: I0128 15:33:22.069323 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-t6ht4"] Jan 28 15:33:22 crc kubenswrapper[4981]: I0128 15:33:22.319379 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:33:22 crc kubenswrapper[4981]: E0128 15:33:22.319647 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:33:23 crc kubenswrapper[4981]: I0128 15:33:23.331047 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7722b5f2-e226-483f-9ae3-d2b5a9e5a605" path="/var/lib/kubelet/pods/7722b5f2-e226-483f-9ae3-d2b5a9e5a605/volumes" Jan 28 15:33:24 crc kubenswrapper[4981]: I0128 15:33:24.525956 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8vg58" Jan 28 15:33:24 crc kubenswrapper[4981]: I0128 15:33:24.526339 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8vg58" Jan 28 15:33:24 crc kubenswrapper[4981]: I0128 15:33:24.606571 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8vg58" Jan 28 15:33:24 crc kubenswrapper[4981]: I0128 15:33:24.878766 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8vg58" Jan 28 15:33:24 crc kubenswrapper[4981]: I0128 15:33:24.938292 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8vg58"] Jan 28 15:33:26 crc kubenswrapper[4981]: I0128 15:33:26.851560 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8vg58" podUID="7eb43acd-6c3a-4ec3-bed3-69b07172a785" containerName="registry-server" containerID="cri-o://f4d654b3a22bbb0a51ec3a15d35ae4c89d4a785e35de6ff984fce2fa103280f6" gracePeriod=2 Jan 28 15:33:27 crc kubenswrapper[4981]: I0128 15:33:27.364775 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vg58" Jan 28 15:33:27 crc kubenswrapper[4981]: I0128 15:33:27.480069 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbhcg\" (UniqueName: \"kubernetes.io/projected/7eb43acd-6c3a-4ec3-bed3-69b07172a785-kube-api-access-hbhcg\") pod \"7eb43acd-6c3a-4ec3-bed3-69b07172a785\" (UID: \"7eb43acd-6c3a-4ec3-bed3-69b07172a785\") " Jan 28 15:33:27 crc kubenswrapper[4981]: I0128 15:33:27.480125 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb43acd-6c3a-4ec3-bed3-69b07172a785-catalog-content\") pod \"7eb43acd-6c3a-4ec3-bed3-69b07172a785\" (UID: \"7eb43acd-6c3a-4ec3-bed3-69b07172a785\") " Jan 28 15:33:27 crc kubenswrapper[4981]: I0128 15:33:27.481390 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb43acd-6c3a-4ec3-bed3-69b07172a785-utilities\") pod \"7eb43acd-6c3a-4ec3-bed3-69b07172a785\" (UID: \"7eb43acd-6c3a-4ec3-bed3-69b07172a785\") " Jan 28 15:33:27 crc kubenswrapper[4981]: I0128 15:33:27.482832 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eb43acd-6c3a-4ec3-bed3-69b07172a785-utilities" (OuterVolumeSpecName: "utilities") pod "7eb43acd-6c3a-4ec3-bed3-69b07172a785" (UID: "7eb43acd-6c3a-4ec3-bed3-69b07172a785"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:33:27 crc kubenswrapper[4981]: I0128 15:33:27.483378 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eb43acd-6c3a-4ec3-bed3-69b07172a785-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:27 crc kubenswrapper[4981]: I0128 15:33:27.486560 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb43acd-6c3a-4ec3-bed3-69b07172a785-kube-api-access-hbhcg" (OuterVolumeSpecName: "kube-api-access-hbhcg") pod "7eb43acd-6c3a-4ec3-bed3-69b07172a785" (UID: "7eb43acd-6c3a-4ec3-bed3-69b07172a785"). InnerVolumeSpecName "kube-api-access-hbhcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:33:27 crc kubenswrapper[4981]: I0128 15:33:27.585116 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbhcg\" (UniqueName: \"kubernetes.io/projected/7eb43acd-6c3a-4ec3-bed3-69b07172a785-kube-api-access-hbhcg\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:27 crc kubenswrapper[4981]: I0128 15:33:27.868160 4981 generic.go:334] "Generic (PLEG): container finished" podID="7eb43acd-6c3a-4ec3-bed3-69b07172a785" containerID="f4d654b3a22bbb0a51ec3a15d35ae4c89d4a785e35de6ff984fce2fa103280f6" exitCode=0 Jan 28 15:33:27 crc kubenswrapper[4981]: I0128 15:33:27.868252 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vg58" event={"ID":"7eb43acd-6c3a-4ec3-bed3-69b07172a785","Type":"ContainerDied","Data":"f4d654b3a22bbb0a51ec3a15d35ae4c89d4a785e35de6ff984fce2fa103280f6"} Jan 28 15:33:27 crc kubenswrapper[4981]: I0128 15:33:27.868304 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vg58" event={"ID":"7eb43acd-6c3a-4ec3-bed3-69b07172a785","Type":"ContainerDied","Data":"eeff73dbd732d7d413c0ef438fde46edd7903eda4ffd1375cc85a0ff421a8470"} Jan 28 15:33:27 crc kubenswrapper[4981]: I0128 15:33:27.868335 4981 scope.go:117] "RemoveContainer" containerID="f4d654b3a22bbb0a51ec3a15d35ae4c89d4a785e35de6ff984fce2fa103280f6" Jan 28 15:33:27 crc kubenswrapper[4981]: I0128 15:33:27.868333 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8vg58" Jan 28 15:33:27 crc kubenswrapper[4981]: I0128 15:33:27.898223 4981 scope.go:117] "RemoveContainer" containerID="6293fefec5c7f0ae47f783fd085d6a23613a915041feebef8d7373cbc53adbd3" Jan 28 15:33:27 crc kubenswrapper[4981]: I0128 15:33:27.933023 4981 scope.go:117] "RemoveContainer" containerID="48b40601cd66a39b370e9d5e2fa96d59fc05679d19d47ccd66c1327dde87eade" Jan 28 15:33:28 crc kubenswrapper[4981]: I0128 15:33:28.000654 4981 scope.go:117] "RemoveContainer" containerID="f4d654b3a22bbb0a51ec3a15d35ae4c89d4a785e35de6ff984fce2fa103280f6" Jan 28 15:33:28 crc kubenswrapper[4981]: E0128 15:33:28.001441 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4d654b3a22bbb0a51ec3a15d35ae4c89d4a785e35de6ff984fce2fa103280f6\": container with ID starting with f4d654b3a22bbb0a51ec3a15d35ae4c89d4a785e35de6ff984fce2fa103280f6 not found: ID does not exist" containerID="f4d654b3a22bbb0a51ec3a15d35ae4c89d4a785e35de6ff984fce2fa103280f6" Jan 28 15:33:28 crc kubenswrapper[4981]: I0128 15:33:28.001522 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4d654b3a22bbb0a51ec3a15d35ae4c89d4a785e35de6ff984fce2fa103280f6"} err="failed to get container status \"f4d654b3a22bbb0a51ec3a15d35ae4c89d4a785e35de6ff984fce2fa103280f6\": rpc error: code = NotFound desc = could not find container \"f4d654b3a22bbb0a51ec3a15d35ae4c89d4a785e35de6ff984fce2fa103280f6\": container with ID starting with f4d654b3a22bbb0a51ec3a15d35ae4c89d4a785e35de6ff984fce2fa103280f6 not found: ID does not exist" Jan 28 15:33:28 crc kubenswrapper[4981]: I0128 15:33:28.001576 4981 scope.go:117] "RemoveContainer" containerID="6293fefec5c7f0ae47f783fd085d6a23613a915041feebef8d7373cbc53adbd3" Jan 28 15:33:28 crc kubenswrapper[4981]: E0128 15:33:28.002282 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6293fefec5c7f0ae47f783fd085d6a23613a915041feebef8d7373cbc53adbd3\": container with ID starting with 6293fefec5c7f0ae47f783fd085d6a23613a915041feebef8d7373cbc53adbd3 not found: ID does not exist" containerID="6293fefec5c7f0ae47f783fd085d6a23613a915041feebef8d7373cbc53adbd3" Jan 28 15:33:28 crc kubenswrapper[4981]: I0128 15:33:28.002378 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6293fefec5c7f0ae47f783fd085d6a23613a915041feebef8d7373cbc53adbd3"} err="failed to get container status \"6293fefec5c7f0ae47f783fd085d6a23613a915041feebef8d7373cbc53adbd3\": rpc error: code = NotFound desc = could not find container \"6293fefec5c7f0ae47f783fd085d6a23613a915041feebef8d7373cbc53adbd3\": container with ID starting with 6293fefec5c7f0ae47f783fd085d6a23613a915041feebef8d7373cbc53adbd3 not found: ID does not exist" Jan 28 15:33:28 crc kubenswrapper[4981]: I0128 15:33:28.002445 4981 scope.go:117] "RemoveContainer" containerID="48b40601cd66a39b370e9d5e2fa96d59fc05679d19d47ccd66c1327dde87eade" Jan 28 15:33:28 crc kubenswrapper[4981]: E0128 15:33:28.003162 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48b40601cd66a39b370e9d5e2fa96d59fc05679d19d47ccd66c1327dde87eade\": container with ID starting with 48b40601cd66a39b370e9d5e2fa96d59fc05679d19d47ccd66c1327dde87eade not found: ID does not exist" containerID="48b40601cd66a39b370e9d5e2fa96d59fc05679d19d47ccd66c1327dde87eade" 
Jan 28 15:33:28 crc kubenswrapper[4981]: I0128 15:33:28.003237 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b40601cd66a39b370e9d5e2fa96d59fc05679d19d47ccd66c1327dde87eade"} err="failed to get container status \"48b40601cd66a39b370e9d5e2fa96d59fc05679d19d47ccd66c1327dde87eade\": rpc error: code = NotFound desc = could not find container \"48b40601cd66a39b370e9d5e2fa96d59fc05679d19d47ccd66c1327dde87eade\": container with ID starting with 48b40601cd66a39b370e9d5e2fa96d59fc05679d19d47ccd66c1327dde87eade not found: ID does not exist" Jan 28 15:33:28 crc kubenswrapper[4981]: I0128 15:33:28.031500 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eb43acd-6c3a-4ec3-bed3-69b07172a785-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7eb43acd-6c3a-4ec3-bed3-69b07172a785" (UID: "7eb43acd-6c3a-4ec3-bed3-69b07172a785"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:33:28 crc kubenswrapper[4981]: I0128 15:33:28.097555 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eb43acd-6c3a-4ec3-bed3-69b07172a785-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:33:28 crc kubenswrapper[4981]: I0128 15:33:28.236535 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8vg58"] Jan 28 15:33:28 crc kubenswrapper[4981]: I0128 15:33:28.245984 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8vg58"] Jan 28 15:33:29 crc kubenswrapper[4981]: I0128 15:33:29.347260 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eb43acd-6c3a-4ec3-bed3-69b07172a785" path="/var/lib/kubelet/pods/7eb43acd-6c3a-4ec3-bed3-69b07172a785/volumes" Jan 28 15:33:32 crc kubenswrapper[4981]: I0128 15:33:32.320521 4981 scope.go:117] "RemoveContainer" containerID="c5944497a2ac4ceb53b6cbe937cd24e7e3dcf97274ecb387b492b2513b8a7298" Jan 28 15:33:32 crc kubenswrapper[4981]: I0128 15:33:32.387967 4981 scope.go:117] "RemoveContainer" containerID="21bd324287996431c55b2bf3804ea5d22ed628d8e31df40fee954b5b069d4c92" Jan 28 15:33:32 crc kubenswrapper[4981]: I0128 15:33:32.434001 4981 scope.go:117] "RemoveContainer" containerID="75e50301a82d3a1906792190c6d192eff716c7b2d6bd903666385be3125b6794" Jan 28 15:33:34 crc kubenswrapper[4981]: I0128 15:33:34.033488 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-h8htg"] Jan 28 15:33:34 crc kubenswrapper[4981]: I0128 15:33:34.040373 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-h8htg"] Jan 28 15:33:35 crc kubenswrapper[4981]: I0128 15:33:35.332606 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a747315-c181-4459-ae1d-3c0c5252efb7" path="/var/lib/kubelet/pods/5a747315-c181-4459-ae1d-3c0c5252efb7/volumes" Jan 28 15:33:37 crc kubenswrapper[4981]: I0128 15:33:37.032183 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-h9wgp"] Jan 28 15:33:37 crc kubenswrapper[4981]: I0128 15:33:37.056176 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-h9wgp"] Jan 28 15:33:37 crc kubenswrapper[4981]: I0128 15:33:37.319290 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:33:37 crc kubenswrapper[4981]: E0128 
15:33:37.319645 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:33:37 crc kubenswrapper[4981]: I0128 15:33:37.341641 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="150ae7b2-4b64-48c7-86b3-71d7841afba3" path="/var/lib/kubelet/pods/150ae7b2-4b64-48c7-86b3-71d7841afba3/volumes" Jan 28 15:33:45 crc kubenswrapper[4981]: I0128 15:33:45.034847 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6vfxh"] Jan 28 15:33:45 crc kubenswrapper[4981]: I0128 15:33:45.044049 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6vfxh"] Jan 28 15:33:45 crc kubenswrapper[4981]: I0128 15:33:45.051808 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-lkbq5"] Jan 28 15:33:45 crc kubenswrapper[4981]: I0128 15:33:45.060884 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-czcg6"] Jan 28 15:33:45 crc kubenswrapper[4981]: I0128 15:33:45.068465 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-733a-account-create-update-x8f26"] Jan 28 15:33:45 crc kubenswrapper[4981]: I0128 15:33:45.075298 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-lkbq5"] Jan 28 15:33:45 crc kubenswrapper[4981]: I0128 15:33:45.081493 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-czcg6"] Jan 28 15:33:45 crc kubenswrapper[4981]: I0128 15:33:45.087370 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-51f6-account-create-update-qdrbh"] Jan 28 15:33:45 crc kubenswrapper[4981]: I0128 15:33:45.094690 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-733a-account-create-update-x8f26"] Jan 28 15:33:45 crc kubenswrapper[4981]: I0128 15:33:45.101059 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-51f6-account-create-update-qdrbh"] Jan 28 15:33:45 crc kubenswrapper[4981]: I0128 15:33:45.329400 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1994b16c-0ff3-4534-be1a-fcc718dd6eed" path="/var/lib/kubelet/pods/1994b16c-0ff3-4534-be1a-fcc718dd6eed/volumes" Jan 28 15:33:45 crc kubenswrapper[4981]: I0128 15:33:45.330069 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d4a209b-c995-42fa-9a1c-82a1c9c60e91" path="/var/lib/kubelet/pods/6d4a209b-c995-42fa-9a1c-82a1c9c60e91/volumes" Jan 28 15:33:45 crc kubenswrapper[4981]: I0128 15:33:45.330615 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73d5db70-d787-4d5b-9c2c-64859f2acf0c" path="/var/lib/kubelet/pods/73d5db70-d787-4d5b-9c2c-64859f2acf0c/volumes" Jan 28 15:33:45 crc kubenswrapper[4981]: I0128 15:33:45.331406 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85334287-4d7c-428a-bf1d-20f5511f442a" path="/var/lib/kubelet/pods/85334287-4d7c-428a-bf1d-20f5511f442a/volumes" Jan 28 15:33:45 crc kubenswrapper[4981]: I0128 15:33:45.333940 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9bc6f6e3-3fae-4476-9ec3-db95f636ac09" path="/var/lib/kubelet/pods/9bc6f6e3-3fae-4476-9ec3-db95f636ac09/volumes" Jan 28 15:33:46 crc kubenswrapper[4981]: I0128 15:33:46.037476 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-41b8-account-create-update-s8rrk"] Jan 28 15:33:46 crc kubenswrapper[4981]: I0128 15:33:46.052390 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-41b8-account-create-update-s8rrk"] Jan 28 15:33:47 crc kubenswrapper[4981]: I0128 15:33:47.330389 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a51d19b-f2d1-4164-bb39-6da466eb115c" path="/var/lib/kubelet/pods/2a51d19b-f2d1-4164-bb39-6da466eb115c/volumes" Jan 28 15:33:48 crc kubenswrapper[4981]: I0128 15:33:48.319147 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:33:48 crc kubenswrapper[4981]: E0128 15:33:48.319460 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:34:03 crc kubenswrapper[4981]: I0128 15:34:03.240341 4981 generic.go:334] "Generic (PLEG): container finished" podID="f50a5359-8f8b-47bc-a345-c91ace0f611f" containerID="9e215e889a44a68fb83d934da45b11de2a0fa18bbbf758985726f1b7988ca11f" exitCode=0 Jan 28 15:34:03 crc kubenswrapper[4981]: I0128 15:34:03.240614 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p" event={"ID":"f50a5359-8f8b-47bc-a345-c91ace0f611f","Type":"ContainerDied","Data":"9e215e889a44a68fb83d934da45b11de2a0fa18bbbf758985726f1b7988ca11f"} Jan 28 15:34:03 crc kubenswrapper[4981]: I0128 15:34:03.319681 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:34:03 crc kubenswrapper[4981]: E0128 15:34:03.320092 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:34:04 crc kubenswrapper[4981]: I0128 15:34:04.756709 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p" Jan 28 15:34:04 crc kubenswrapper[4981]: I0128 15:34:04.855869 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f50a5359-8f8b-47bc-a345-c91ace0f611f-ssh-key-openstack-edpm-ipam\") pod \"f50a5359-8f8b-47bc-a345-c91ace0f611f\" (UID: \"f50a5359-8f8b-47bc-a345-c91ace0f611f\") " Jan 28 15:34:04 crc kubenswrapper[4981]: I0128 15:34:04.855920 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsmhz\" (UniqueName: \"kubernetes.io/projected/f50a5359-8f8b-47bc-a345-c91ace0f611f-kube-api-access-qsmhz\") pod \"f50a5359-8f8b-47bc-a345-c91ace0f611f\" (UID: \"f50a5359-8f8b-47bc-a345-c91ace0f611f\") " Jan 28 15:34:04 crc kubenswrapper[4981]: I0128 15:34:04.856052 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f50a5359-8f8b-47bc-a345-c91ace0f611f-inventory\") pod \"f50a5359-8f8b-47bc-a345-c91ace0f611f\" (UID: \"f50a5359-8f8b-47bc-a345-c91ace0f611f\") " Jan 28 15:34:04 crc kubenswrapper[4981]: I0128 15:34:04.865524 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f50a5359-8f8b-47bc-a345-c91ace0f611f-kube-api-access-qsmhz" (OuterVolumeSpecName: "kube-api-access-qsmhz") pod "f50a5359-8f8b-47bc-a345-c91ace0f611f" (UID: "f50a5359-8f8b-47bc-a345-c91ace0f611f"). InnerVolumeSpecName "kube-api-access-qsmhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:34:04 crc kubenswrapper[4981]: I0128 15:34:04.896727 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f50a5359-8f8b-47bc-a345-c91ace0f611f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f50a5359-8f8b-47bc-a345-c91ace0f611f" (UID: "f50a5359-8f8b-47bc-a345-c91ace0f611f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:34:04 crc kubenswrapper[4981]: I0128 15:34:04.916715 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f50a5359-8f8b-47bc-a345-c91ace0f611f-inventory" (OuterVolumeSpecName: "inventory") pod "f50a5359-8f8b-47bc-a345-c91ace0f611f" (UID: "f50a5359-8f8b-47bc-a345-c91ace0f611f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:34:04 crc kubenswrapper[4981]: I0128 15:34:04.959094 4981 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f50a5359-8f8b-47bc-a345-c91ace0f611f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:04 crc kubenswrapper[4981]: I0128 15:34:04.959137 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsmhz\" (UniqueName: \"kubernetes.io/projected/f50a5359-8f8b-47bc-a345-c91ace0f611f-kube-api-access-qsmhz\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:04 crc kubenswrapper[4981]: I0128 15:34:04.959150 4981 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f50a5359-8f8b-47bc-a345-c91ace0f611f-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.265521 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p" event={"ID":"f50a5359-8f8b-47bc-a345-c91ace0f611f","Type":"ContainerDied","Data":"34a983cae73045127b34839a387a65decd77450285c1aa4abafaea67e3ddfec1"} Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.265566 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34a983cae73045127b34839a387a65decd77450285c1aa4abafaea67e3ddfec1" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.265588 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.379680 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts"] Jan 28 15:34:05 crc kubenswrapper[4981]: E0128 15:34:05.380104 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb43acd-6c3a-4ec3-bed3-69b07172a785" containerName="registry-server" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.380124 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb43acd-6c3a-4ec3-bed3-69b07172a785" containerName="registry-server" Jan 28 15:34:05 crc kubenswrapper[4981]: E0128 15:34:05.380157 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb43acd-6c3a-4ec3-bed3-69b07172a785" containerName="extract-utilities" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.380166 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb43acd-6c3a-4ec3-bed3-69b07172a785" containerName="extract-utilities" Jan 28 15:34:05 crc kubenswrapper[4981]: E0128 15:34:05.380207 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f50a5359-8f8b-47bc-a345-c91ace0f611f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.380218 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f50a5359-8f8b-47bc-a345-c91ace0f611f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 28 15:34:05 crc kubenswrapper[4981]: E0128 15:34:05.380240 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb43acd-6c3a-4ec3-bed3-69b07172a785" containerName="extract-content" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.380248 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb43acd-6c3a-4ec3-bed3-69b07172a785" containerName="extract-content" Jan 28 15:34:05 crc 
kubenswrapper[4981]: I0128 15:34:05.382878 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f50a5359-8f8b-47bc-a345-c91ace0f611f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.382938 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb43acd-6c3a-4ec3-bed3-69b07172a785" containerName="registry-server" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.383702 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.386394 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.386506 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pz626" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.388554 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.388579 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.401042 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts"] Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.471170 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f654c5ca-b187-484f-b9bd-c487bda39586-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ckpts\" (UID: \"f654c5ca-b187-484f-b9bd-c487bda39586\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.471274 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f654c5ca-b187-484f-b9bd-c487bda39586-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ckpts\" (UID: \"f654c5ca-b187-484f-b9bd-c487bda39586\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.471311 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86tcm\" (UniqueName: \"kubernetes.io/projected/f654c5ca-b187-484f-b9bd-c487bda39586-kube-api-access-86tcm\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ckpts\" (UID: \"f654c5ca-b187-484f-b9bd-c487bda39586\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.573403 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f654c5ca-b187-484f-b9bd-c487bda39586-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ckpts\" (UID: \"f654c5ca-b187-484f-b9bd-c487bda39586\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.573913 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f654c5ca-b187-484f-b9bd-c487bda39586-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ckpts\" (UID: \"f654c5ca-b187-484f-b9bd-c487bda39586\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.573968 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86tcm\" (UniqueName: \"kubernetes.io/projected/f654c5ca-b187-484f-b9bd-c487bda39586-kube-api-access-86tcm\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ckpts\" (UID: \"f654c5ca-b187-484f-b9bd-c487bda39586\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.581414 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f654c5ca-b187-484f-b9bd-c487bda39586-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ckpts\" (UID: \"f654c5ca-b187-484f-b9bd-c487bda39586\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.583059 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f654c5ca-b187-484f-b9bd-c487bda39586-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ckpts\" (UID: \"f654c5ca-b187-484f-b9bd-c487bda39586\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.595148 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86tcm\" (UniqueName: \"kubernetes.io/projected/f654c5ca-b187-484f-b9bd-c487bda39586-kube-api-access-86tcm\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-ckpts\" (UID: \"f654c5ca-b187-484f-b9bd-c487bda39586\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts" Jan 28 15:34:05 crc kubenswrapper[4981]: I0128 15:34:05.703463 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts" Jan 28 15:34:06 crc kubenswrapper[4981]: I0128 15:34:06.133057 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts"] Jan 28 15:34:06 crc kubenswrapper[4981]: I0128 15:34:06.276806 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts" event={"ID":"f654c5ca-b187-484f-b9bd-c487bda39586","Type":"ContainerStarted","Data":"47dab15297ec191369ab43413920d34a838c74ed7b59f1bbd26364be0360b7e4"} Jan 28 15:34:07 crc kubenswrapper[4981]: I0128 15:34:07.305870 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts" event={"ID":"f654c5ca-b187-484f-b9bd-c487bda39586","Type":"ContainerStarted","Data":"248afc742ca46dba2d2ea0164a373527344fd820cb3b7f987ff56962af231cfe"} Jan 28 15:34:07 crc kubenswrapper[4981]: I0128 15:34:07.335223 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts" podStartSLOduration=1.872127195 podStartE2EDuration="2.335179537s" podCreationTimestamp="2026-01-28 15:34:05 +0000 UTC" firstStartedPulling="2026-01-28 15:34:06.145023106 +0000 UTC m=+1857.597181387" lastFinishedPulling="2026-01-28 15:34:06.608075478 +0000 UTC m=+1858.060233729" observedRunningTime="2026-01-28 15:34:07.333343557 +0000 UTC m=+1858.785501838" watchObservedRunningTime="2026-01-28 15:34:07.335179537 +0000 UTC m=+1858.787337788" Jan 28 15:34:11 crc kubenswrapper[4981]: I0128 15:34:11.351511 4981 generic.go:334] "Generic (PLEG): container finished" podID="f654c5ca-b187-484f-b9bd-c487bda39586" containerID="248afc742ca46dba2d2ea0164a373527344fd820cb3b7f987ff56962af231cfe" exitCode=0 Jan 28 15:34:11 crc kubenswrapper[4981]: I0128 15:34:11.351628 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts" event={"ID":"f654c5ca-b187-484f-b9bd-c487bda39586","Type":"ContainerDied","Data":"248afc742ca46dba2d2ea0164a373527344fd820cb3b7f987ff56962af231cfe"} Jan 28 15:34:12 crc kubenswrapper[4981]: I0128 15:34:12.772564 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts" Jan 28 15:34:12 crc kubenswrapper[4981]: I0128 15:34:12.826447 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f654c5ca-b187-484f-b9bd-c487bda39586-inventory\") pod \"f654c5ca-b187-484f-b9bd-c487bda39586\" (UID: \"f654c5ca-b187-484f-b9bd-c487bda39586\") " Jan 28 15:34:12 crc kubenswrapper[4981]: I0128 15:34:12.826555 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86tcm\" (UniqueName: \"kubernetes.io/projected/f654c5ca-b187-484f-b9bd-c487bda39586-kube-api-access-86tcm\") pod \"f654c5ca-b187-484f-b9bd-c487bda39586\" (UID: \"f654c5ca-b187-484f-b9bd-c487bda39586\") " Jan 28 15:34:12 crc kubenswrapper[4981]: I0128 15:34:12.826711 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f654c5ca-b187-484f-b9bd-c487bda39586-ssh-key-openstack-edpm-ipam\") pod \"f654c5ca-b187-484f-b9bd-c487bda39586\" (UID: \"f654c5ca-b187-484f-b9bd-c487bda39586\") " Jan 28 15:34:12 crc kubenswrapper[4981]: I0128 15:34:12.837256 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f654c5ca-b187-484f-b9bd-c487bda39586-kube-api-access-86tcm" (OuterVolumeSpecName: "kube-api-access-86tcm") pod "f654c5ca-b187-484f-b9bd-c487bda39586" (UID: "f654c5ca-b187-484f-b9bd-c487bda39586"). InnerVolumeSpecName "kube-api-access-86tcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:34:12 crc kubenswrapper[4981]: I0128 15:34:12.857524 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f654c5ca-b187-484f-b9bd-c487bda39586-inventory" (OuterVolumeSpecName: "inventory") pod "f654c5ca-b187-484f-b9bd-c487bda39586" (UID: "f654c5ca-b187-484f-b9bd-c487bda39586"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:34:12 crc kubenswrapper[4981]: I0128 15:34:12.871785 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f654c5ca-b187-484f-b9bd-c487bda39586-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f654c5ca-b187-484f-b9bd-c487bda39586" (UID: "f654c5ca-b187-484f-b9bd-c487bda39586"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:34:12 crc kubenswrapper[4981]: I0128 15:34:12.929451 4981 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f654c5ca-b187-484f-b9bd-c487bda39586-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:12 crc kubenswrapper[4981]: I0128 15:34:12.929508 4981 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f654c5ca-b187-484f-b9bd-c487bda39586-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:12 crc kubenswrapper[4981]: I0128 15:34:12.929522 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86tcm\" (UniqueName: \"kubernetes.io/projected/f654c5ca-b187-484f-b9bd-c487bda39586-kube-api-access-86tcm\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.381399 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts" event={"ID":"f654c5ca-b187-484f-b9bd-c487bda39586","Type":"ContainerDied","Data":"47dab15297ec191369ab43413920d34a838c74ed7b59f1bbd26364be0360b7e4"} Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.381484 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47dab15297ec191369ab43413920d34a838c74ed7b59f1bbd26364be0360b7e4" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.381553 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-ckpts" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.470588 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj"] Jan 28 15:34:13 crc kubenswrapper[4981]: E0128 15:34:13.471157 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f654c5ca-b187-484f-b9bd-c487bda39586" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.471181 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f654c5ca-b187-484f-b9bd-c487bda39586" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.471457 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f654c5ca-b187-484f-b9bd-c487bda39586" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.472245 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.478160 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.478482 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pz626" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.478651 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.478712 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.488869 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj"] Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.543348 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dd2589f-e346-4ce7-a193-1e8eac0a2318-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7q8hj\" (UID: \"7dd2589f-e346-4ce7-a193-1e8eac0a2318\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.543724 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dd2589f-e346-4ce7-a193-1e8eac0a2318-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7q8hj\" (UID: \"7dd2589f-e346-4ce7-a193-1e8eac0a2318\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.544096 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgxkk\" (UniqueName: \"kubernetes.io/projected/7dd2589f-e346-4ce7-a193-1e8eac0a2318-kube-api-access-lgxkk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7q8hj\" (UID: \"7dd2589f-e346-4ce7-a193-1e8eac0a2318\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.646318 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgxkk\" (UniqueName: \"kubernetes.io/projected/7dd2589f-e346-4ce7-a193-1e8eac0a2318-kube-api-access-lgxkk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7q8hj\" (UID: \"7dd2589f-e346-4ce7-a193-1e8eac0a2318\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.646441 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dd2589f-e346-4ce7-a193-1e8eac0a2318-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7q8hj\" (UID: \"7dd2589f-e346-4ce7-a193-1e8eac0a2318\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.646539 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dd2589f-e346-4ce7-a193-1e8eac0a2318-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-7q8hj\" (UID: \"7dd2589f-e346-4ce7-a193-1e8eac0a2318\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.670678 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dd2589f-e346-4ce7-a193-1e8eac0a2318-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7q8hj\" (UID: \"7dd2589f-e346-4ce7-a193-1e8eac0a2318\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.672753 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dd2589f-e346-4ce7-a193-1e8eac0a2318-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7q8hj\" (UID: \"7dd2589f-e346-4ce7-a193-1e8eac0a2318\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.673976 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgxkk\" (UniqueName: \"kubernetes.io/projected/7dd2589f-e346-4ce7-a193-1e8eac0a2318-kube-api-access-lgxkk\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-7q8hj\" (UID: \"7dd2589f-e346-4ce7-a193-1e8eac0a2318\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj" Jan 28 15:34:13 crc kubenswrapper[4981]: I0128 15:34:13.798602 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj" Jan 28 15:34:14 crc kubenswrapper[4981]: I0128 15:34:14.400975 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj"] Jan 28 15:34:14 crc kubenswrapper[4981]: W0128 15:34:14.411285 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dd2589f_e346_4ce7_a193_1e8eac0a2318.slice/crio-ecb55706e25f52ffd4e0a16a4d388ec1ba7de6a6e33a61c514d8978fa76a902d WatchSource:0}: Error finding container ecb55706e25f52ffd4e0a16a4d388ec1ba7de6a6e33a61c514d8978fa76a902d: Status 404 returned error can't find the container with id ecb55706e25f52ffd4e0a16a4d388ec1ba7de6a6e33a61c514d8978fa76a902d Jan 28 15:34:15 crc kubenswrapper[4981]: I0128 15:34:15.402927 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj" event={"ID":"7dd2589f-e346-4ce7-a193-1e8eac0a2318","Type":"ContainerStarted","Data":"32e5427b98231fa7e08968425332321540439c6e72616c3772b6ff4f14a05659"} Jan 28 15:34:15 crc kubenswrapper[4981]: I0128 15:34:15.403273 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj" event={"ID":"7dd2589f-e346-4ce7-a193-1e8eac0a2318","Type":"ContainerStarted","Data":"ecb55706e25f52ffd4e0a16a4d388ec1ba7de6a6e33a61c514d8978fa76a902d"} Jan 28 15:34:15 crc kubenswrapper[4981]: I0128 15:34:15.427314 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj" podStartSLOduration=2.009030662 podStartE2EDuration="2.427289152s" podCreationTimestamp="2026-01-28 15:34:13 +0000 UTC" firstStartedPulling="2026-01-28 15:34:14.413799917 +0000 UTC m=+1865.865958158" lastFinishedPulling="2026-01-28 
15:34:14.832058377 +0000 UTC m=+1866.284216648" observedRunningTime="2026-01-28 15:34:15.418471191 +0000 UTC m=+1866.870629462" watchObservedRunningTime="2026-01-28 15:34:15.427289152 +0000 UTC m=+1866.879447393" Jan 28 15:34:18 crc kubenswrapper[4981]: I0128 15:34:18.319112 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:34:18 crc kubenswrapper[4981]: E0128 15:34:18.319617 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:34:21 crc kubenswrapper[4981]: I0128 15:34:21.037439 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fpszk"] Jan 28 15:34:21 crc kubenswrapper[4981]: I0128 15:34:21.045397 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-fpszk"] Jan 28 15:34:21 crc kubenswrapper[4981]: I0128 15:34:21.338943 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7267f1cd-1602-4397-a6c4-668efb787be4" path="/var/lib/kubelet/pods/7267f1cd-1602-4397-a6c4-668efb787be4/volumes" Jan 28 15:34:32 crc kubenswrapper[4981]: I0128 15:34:32.319160 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:34:32 crc kubenswrapper[4981]: E0128 15:34:32.319801 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:34:32 crc kubenswrapper[4981]: I0128 15:34:32.563043 4981 scope.go:117] "RemoveContainer" containerID="602e647e355840485038d99df186edb1cd36a2b8fc2933d61b3ea82c234665df" Jan 28 15:34:32 crc kubenswrapper[4981]: I0128 15:34:32.588270 4981 scope.go:117] "RemoveContainer" containerID="c8270963b0feaf7876dd1797cd835f1a9ffafa2375b5a36457463111738cf5be" Jan 28 15:34:32 crc kubenswrapper[4981]: I0128 15:34:32.639995 4981 scope.go:117] "RemoveContainer" containerID="e9e00dd48ca4452cd65fcfee7bebd1e3bd503c4ef5a635a2575aac1b82826d42" Jan 28 15:34:32 crc kubenswrapper[4981]: I0128 15:34:32.713208 4981 scope.go:117] "RemoveContainer" containerID="7a8015ad5efc7040ba91b0cc760c39bcbbc6210c43204ce4f08717c582641cd3" Jan 28 15:34:32 crc kubenswrapper[4981]: I0128 15:34:32.741616 4981 scope.go:117] "RemoveContainer" containerID="4bdb329cede3dd3c6dfb1944d29cf6db0d8eb4bed1234c78148c65c225a843fc" Jan 28 15:34:32 crc kubenswrapper[4981]: I0128 15:34:32.811657 4981 scope.go:117] "RemoveContainer" containerID="f577551ae3f69b50a3f1f480a541a3283df0b2c22a53555560b2e5509f4d53ce" Jan 28 15:34:32 crc kubenswrapper[4981]: I0128 15:34:32.870271 4981 scope.go:117] "RemoveContainer" containerID="ef41d371bc38090af3ea5f5d5b836da62de3f1eb3c0855d616e9eebdaa5e2145" Jan 28 15:34:32 crc kubenswrapper[4981]: I0128 15:34:32.912011 4981 scope.go:117] "RemoveContainer" 
containerID="4daa23ca251918f431c71796a106f059b31f9684e7815db0ea87cfcbd133962d" Jan 28 15:34:32 crc kubenswrapper[4981]: I0128 15:34:32.953556 4981 scope.go:117] "RemoveContainer" containerID="83ba1d5df8289aca43300f7d9127640f14033dff6e141234cf72734923d078db" Jan 28 15:34:46 crc kubenswrapper[4981]: I0128 15:34:46.318557 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:34:46 crc kubenswrapper[4981]: E0128 15:34:46.319296 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:34:51 crc kubenswrapper[4981]: I0128 15:34:51.048229 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-w4dpt"] Jan 28 15:34:51 crc kubenswrapper[4981]: I0128 15:34:51.059647 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-w4dpt"] Jan 28 15:34:51 crc kubenswrapper[4981]: I0128 15:34:51.381670 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61d4eb9c-3032-485a-9c38-983eae66cbc8" path="/var/lib/kubelet/pods/61d4eb9c-3032-485a-9c38-983eae66cbc8/volumes" Jan 28 15:34:51 crc kubenswrapper[4981]: I0128 15:34:51.813009 4981 generic.go:334] "Generic (PLEG): container finished" podID="7dd2589f-e346-4ce7-a193-1e8eac0a2318" containerID="32e5427b98231fa7e08968425332321540439c6e72616c3772b6ff4f14a05659" exitCode=0 Jan 28 15:34:51 crc kubenswrapper[4981]: I0128 15:34:51.813046 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj" event={"ID":"7dd2589f-e346-4ce7-a193-1e8eac0a2318","Type":"ContainerDied","Data":"32e5427b98231fa7e08968425332321540439c6e72616c3772b6ff4f14a05659"} Jan 28 15:34:52 crc kubenswrapper[4981]: I0128 15:34:52.031372 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fnttx"] Jan 28 15:34:52 crc kubenswrapper[4981]: I0128 15:34:52.044452 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fnttx"] Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.260257 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj" Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.344288 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a50e2ce1-a7ce-4b26-b97d-6823b74cd974" path="/var/lib/kubelet/pods/a50e2ce1-a7ce-4b26-b97d-6823b74cd974/volumes" Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.390927 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgxkk\" (UniqueName: \"kubernetes.io/projected/7dd2589f-e346-4ce7-a193-1e8eac0a2318-kube-api-access-lgxkk\") pod \"7dd2589f-e346-4ce7-a193-1e8eac0a2318\" (UID: \"7dd2589f-e346-4ce7-a193-1e8eac0a2318\") " Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.391042 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dd2589f-e346-4ce7-a193-1e8eac0a2318-inventory\") pod \"7dd2589f-e346-4ce7-a193-1e8eac0a2318\" (UID: \"7dd2589f-e346-4ce7-a193-1e8eac0a2318\") " Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.391149 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dd2589f-e346-4ce7-a193-1e8eac0a2318-ssh-key-openstack-edpm-ipam\") pod \"7dd2589f-e346-4ce7-a193-1e8eac0a2318\" (UID: \"7dd2589f-e346-4ce7-a193-1e8eac0a2318\") " Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.397654 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dd2589f-e346-4ce7-a193-1e8eac0a2318-kube-api-access-lgxkk" (OuterVolumeSpecName: "kube-api-access-lgxkk") pod "7dd2589f-e346-4ce7-a193-1e8eac0a2318" (UID: "7dd2589f-e346-4ce7-a193-1e8eac0a2318"). InnerVolumeSpecName "kube-api-access-lgxkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.418087 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dd2589f-e346-4ce7-a193-1e8eac0a2318-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7dd2589f-e346-4ce7-a193-1e8eac0a2318" (UID: "7dd2589f-e346-4ce7-a193-1e8eac0a2318"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.427339 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dd2589f-e346-4ce7-a193-1e8eac0a2318-inventory" (OuterVolumeSpecName: "inventory") pod "7dd2589f-e346-4ce7-a193-1e8eac0a2318" (UID: "7dd2589f-e346-4ce7-a193-1e8eac0a2318"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.493429 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgxkk\" (UniqueName: \"kubernetes.io/projected/7dd2589f-e346-4ce7-a193-1e8eac0a2318-kube-api-access-lgxkk\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.493470 4981 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7dd2589f-e346-4ce7-a193-1e8eac0a2318-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.493483 4981 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7dd2589f-e346-4ce7-a193-1e8eac0a2318-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.832613 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj" event={"ID":"7dd2589f-e346-4ce7-a193-1e8eac0a2318","Type":"ContainerDied","Data":"ecb55706e25f52ffd4e0a16a4d388ec1ba7de6a6e33a61c514d8978fa76a902d"} Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.832662 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecb55706e25f52ffd4e0a16a4d388ec1ba7de6a6e33a61c514d8978fa76a902d" Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.832713 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-7q8hj" Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.952935 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6"] Jan 28 15:34:53 crc kubenswrapper[4981]: E0128 15:34:53.957835 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd2589f-e346-4ce7-a193-1e8eac0a2318" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.957955 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd2589f-e346-4ce7-a193-1e8eac0a2318" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.958294 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dd2589f-e346-4ce7-a193-1e8eac0a2318" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.959064 4981 util.go:30] "No sandbox for pod can be found. 
Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.961774 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.961956 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.962521 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.968139 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6"] Jan 28 15:34:53 crc kubenswrapper[4981]: I0128 15:34:53.973949 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pz626" Jan 28 15:34:54 crc kubenswrapper[4981]: I0128 15:34:54.003361 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5xrr\" (UniqueName: \"kubernetes.io/projected/c7607b3a-6cc7-4240-acd3-866b7d39e6be-kube-api-access-v5xrr\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6\" (UID: \"c7607b3a-6cc7-4240-acd3-866b7d39e6be\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6" Jan 28 15:34:54 crc kubenswrapper[4981]: I0128 15:34:54.003899 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7607b3a-6cc7-4240-acd3-866b7d39e6be-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6\" (UID: \"c7607b3a-6cc7-4240-acd3-866b7d39e6be\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6" Jan 28 15:34:54 crc kubenswrapper[4981]: I0128 15:34:54.004114 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7607b3a-6cc7-4240-acd3-866b7d39e6be-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6\" (UID: \"c7607b3a-6cc7-4240-acd3-866b7d39e6be\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6" Jan 28 15:34:54 crc kubenswrapper[4981]: I0128 15:34:54.106275 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7607b3a-6cc7-4240-acd3-866b7d39e6be-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6\" (UID: \"c7607b3a-6cc7-4240-acd3-866b7d39e6be\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6" Jan 28 15:34:54 crc kubenswrapper[4981]: I0128 15:34:54.106335 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5xrr\" (UniqueName: \"kubernetes.io/projected/c7607b3a-6cc7-4240-acd3-866b7d39e6be-kube-api-access-v5xrr\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6\" (UID: \"c7607b3a-6cc7-4240-acd3-866b7d39e6be\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6" Jan 28 15:34:54 crc kubenswrapper[4981]: I0128 15:34:54.106556 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName:
\"kubernetes.io/secret/c7607b3a-6cc7-4240-acd3-866b7d39e6be-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6\" (UID: \"c7607b3a-6cc7-4240-acd3-866b7d39e6be\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6" Jan 28 15:34:54 crc kubenswrapper[4981]: I0128 15:34:54.109726 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7607b3a-6cc7-4240-acd3-866b7d39e6be-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6\" (UID: \"c7607b3a-6cc7-4240-acd3-866b7d39e6be\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6" Jan 28 15:34:54 crc kubenswrapper[4981]: I0128 15:34:54.110011 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7607b3a-6cc7-4240-acd3-866b7d39e6be-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6\" (UID: \"c7607b3a-6cc7-4240-acd3-866b7d39e6be\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6" Jan 28 15:34:54 crc kubenswrapper[4981]: I0128 15:34:54.124834 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5xrr\" (UniqueName: \"kubernetes.io/projected/c7607b3a-6cc7-4240-acd3-866b7d39e6be-kube-api-access-v5xrr\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6\" (UID: \"c7607b3a-6cc7-4240-acd3-866b7d39e6be\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6" Jan 28 15:34:54 crc kubenswrapper[4981]: I0128 15:34:54.315582 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6" Jan 28 15:34:54 crc kubenswrapper[4981]: I0128 15:34:54.915840 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6"] Jan 28 15:34:55 crc kubenswrapper[4981]: I0128 15:34:55.863518 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6" event={"ID":"c7607b3a-6cc7-4240-acd3-866b7d39e6be","Type":"ContainerStarted","Data":"61c5987feb3b0ced72d73e63e0789348a3789ad119f0d34f6dfa26728466905a"} Jan 28 15:34:57 crc kubenswrapper[4981]: I0128 15:34:57.319372 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:34:57 crc kubenswrapper[4981]: E0128 15:34:57.320239 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:34:57 crc kubenswrapper[4981]: I0128 15:34:57.887561 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6" event={"ID":"c7607b3a-6cc7-4240-acd3-866b7d39e6be","Type":"ContainerStarted","Data":"a0f732ece9bd10908dc84c225a686a6f114254765ec7b7a2f5fc0c6747a03211"} Jan 28 15:34:57 crc kubenswrapper[4981]: I0128 15:34:57.911819 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6" 
podStartSLOduration=2.916424746 podStartE2EDuration="4.911803633s" podCreationTimestamp="2026-01-28 15:34:53 +0000 UTC" firstStartedPulling="2026-01-28 15:34:54.921930549 +0000 UTC m=+1906.374088800" lastFinishedPulling="2026-01-28 15:34:56.917309406 +0000 UTC m=+1908.369467687" observedRunningTime="2026-01-28 15:34:57.902644544 +0000 UTC m=+1909.354802785" watchObservedRunningTime="2026-01-28 15:34:57.911803633 +0000 UTC m=+1909.363961874" Jan 28 15:35:12 crc kubenswrapper[4981]: I0128 15:35:12.319297 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:35:12 crc kubenswrapper[4981]: E0128 15:35:12.320177 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:35:25 crc kubenswrapper[4981]: I0128 15:35:25.320094 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:35:25 crc kubenswrapper[4981]: E0128 15:35:25.321071 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:35:33 crc kubenswrapper[4981]: I0128 15:35:33.108695 4981 scope.go:117] "RemoveContainer" containerID="991e80d7e4e80556bf5702b6d6536add2ef70dfafe7074eeaf49904f5da73d0f" Jan 28 15:35:33 crc kubenswrapper[4981]: I0128 15:35:33.146926 4981 scope.go:117] "RemoveContainer" containerID="a4a076f57511672f32054742b5ecb9d34856a0aac3ffdb4cbf6f8423c994c393" Jan 28 15:35:33 crc kubenswrapper[4981]: I0128 15:35:33.200862 4981 scope.go:117] "RemoveContainer" containerID="95e7cb789aaeb7fab4347ecc141e8577533d007fc8ea5900d99242fc2942f77d" Jan 28 15:35:33 crc kubenswrapper[4981]: I0128 15:35:33.271469 4981 scope.go:117] "RemoveContainer" containerID="f676ab8370dce7cc95748402e276fe5dd52680f6df0fd6a3dffdece504da8123" Jan 28 15:35:33 crc kubenswrapper[4981]: I0128 15:35:33.320948 4981 scope.go:117] "RemoveContainer" containerID="50c7a27897c30105fdb7637299bd2765781a4983b2ce7c9e785078085afa7e63" Jan 28 15:35:36 crc kubenswrapper[4981]: I0128 15:35:36.047481 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-chx2v"] Jan 28 15:35:36 crc kubenswrapper[4981]: I0128 15:35:36.054683 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-chx2v"] Jan 28 15:35:37 crc kubenswrapper[4981]: I0128 15:35:37.328367 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="494b1558-721b-44b1-a712-7bb60eb3f5cc" path="/var/lib/kubelet/pods/494b1558-721b-44b1-a712-7bb60eb3f5cc/volumes" Jan 28 15:35:40 crc kubenswrapper[4981]: I0128 15:35:40.319072 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:35:40 crc kubenswrapper[4981]: E0128 15:35:40.319741 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0"
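
The machine-config-daemon entries repeating through this stretch (15:34:57, 15:35:12, 15:35:25, 15:35:40) are the same pod being re-synced while its container sits in CrashLoopBackOff: each pass logs RemoveContainer for the dead container b4f431... and then skips the sync because the backoff window, already at its cap ("back-off 5m0s"), has not expired. A sketch of that restart schedule, assuming the kubelet's usual defaults of a 10s initial delay doubling per failed restart up to a 5m cap:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Assumed kubelet defaults: 10s initial backoff, doubling each
        // failed restart, capped at 5m, the "back-off 5m0s" seen above.
        const maxBackoff = 5 * time.Minute
        delay := 10 * time.Second
        for restart := 1; delay < maxBackoff; restart++ {
            fmt.Printf("restart %d: wait %v\n", restart, delay)
            delay *= 2
        }
        fmt.Println("thereafter: wait", maxBackoff)
    }

The retries land every 13 to 15 seconds here because the periodic pod sync re-evaluates the pod far more often than the capped backoff expires, so the error line repeats without the container ever restarting.
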
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:35:44 crc kubenswrapper[4981]: I0128 15:35:44.402318 4981 generic.go:334] "Generic (PLEG): container finished" podID="c7607b3a-6cc7-4240-acd3-866b7d39e6be" containerID="a0f732ece9bd10908dc84c225a686a6f114254765ec7b7a2f5fc0c6747a03211" exitCode=0 Jan 28 15:35:44 crc kubenswrapper[4981]: I0128 15:35:44.402858 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6" event={"ID":"c7607b3a-6cc7-4240-acd3-866b7d39e6be","Type":"ContainerDied","Data":"a0f732ece9bd10908dc84c225a686a6f114254765ec7b7a2f5fc0c6747a03211"} Jan 28 15:35:45 crc kubenswrapper[4981]: I0128 15:35:45.918714 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.026631 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7607b3a-6cc7-4240-acd3-866b7d39e6be-inventory\") pod \"c7607b3a-6cc7-4240-acd3-866b7d39e6be\" (UID: \"c7607b3a-6cc7-4240-acd3-866b7d39e6be\") " Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.026903 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5xrr\" (UniqueName: \"kubernetes.io/projected/c7607b3a-6cc7-4240-acd3-866b7d39e6be-kube-api-access-v5xrr\") pod \"c7607b3a-6cc7-4240-acd3-866b7d39e6be\" (UID: \"c7607b3a-6cc7-4240-acd3-866b7d39e6be\") " Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.026942 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7607b3a-6cc7-4240-acd3-866b7d39e6be-ssh-key-openstack-edpm-ipam\") pod \"c7607b3a-6cc7-4240-acd3-866b7d39e6be\" (UID: \"c7607b3a-6cc7-4240-acd3-866b7d39e6be\") " Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.048419 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7607b3a-6cc7-4240-acd3-866b7d39e6be-kube-api-access-v5xrr" (OuterVolumeSpecName: "kube-api-access-v5xrr") pod "c7607b3a-6cc7-4240-acd3-866b7d39e6be" (UID: "c7607b3a-6cc7-4240-acd3-866b7d39e6be"). InnerVolumeSpecName "kube-api-access-v5xrr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.054933 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7607b3a-6cc7-4240-acd3-866b7d39e6be-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c7607b3a-6cc7-4240-acd3-866b7d39e6be" (UID: "c7607b3a-6cc7-4240-acd3-866b7d39e6be"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.063134 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7607b3a-6cc7-4240-acd3-866b7d39e6be-inventory" (OuterVolumeSpecName: "inventory") pod "c7607b3a-6cc7-4240-acd3-866b7d39e6be" (UID: "c7607b3a-6cc7-4240-acd3-866b7d39e6be"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.129798 4981 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c7607b3a-6cc7-4240-acd3-866b7d39e6be-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.129832 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5xrr\" (UniqueName: \"kubernetes.io/projected/c7607b3a-6cc7-4240-acd3-866b7d39e6be-kube-api-access-v5xrr\") on node \"crc\" DevicePath \"\"" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.129845 4981 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c7607b3a-6cc7-4240-acd3-866b7d39e6be-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.430923 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6" event={"ID":"c7607b3a-6cc7-4240-acd3-866b7d39e6be","Type":"ContainerDied","Data":"61c5987feb3b0ced72d73e63e0789348a3789ad119f0d34f6dfa26728466905a"} Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.430990 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61c5987feb3b0ced72d73e63e0789348a3789ad119f0d34f6dfa26728466905a" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.431515 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.560176 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gllft"] Jan 28 15:35:46 crc kubenswrapper[4981]: E0128 15:35:46.561432 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7607b3a-6cc7-4240-acd3-866b7d39e6be" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.561455 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7607b3a-6cc7-4240-acd3-866b7d39e6be" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.561697 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7607b3a-6cc7-4240-acd3-866b7d39e6be" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.563425 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gllft" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.565424 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.565551 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pz626" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.565641 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.565687 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.589322 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gllft"] Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.741143 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2531e40d-1556-4368-b4db-be4d6364097a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gllft\" (UID: \"2531e40d-1556-4368-b4db-be4d6364097a\") " pod="openstack/ssh-known-hosts-edpm-deployment-gllft" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.741250 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx5tx\" (UniqueName: \"kubernetes.io/projected/2531e40d-1556-4368-b4db-be4d6364097a-kube-api-access-fx5tx\") pod \"ssh-known-hosts-edpm-deployment-gllft\" (UID: \"2531e40d-1556-4368-b4db-be4d6364097a\") " pod="openstack/ssh-known-hosts-edpm-deployment-gllft" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.741658 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2531e40d-1556-4368-b4db-be4d6364097a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gllft\" (UID: \"2531e40d-1556-4368-b4db-be4d6364097a\") " pod="openstack/ssh-known-hosts-edpm-deployment-gllft" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.843447 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx5tx\" (UniqueName: \"kubernetes.io/projected/2531e40d-1556-4368-b4db-be4d6364097a-kube-api-access-fx5tx\") pod \"ssh-known-hosts-edpm-deployment-gllft\" (UID: \"2531e40d-1556-4368-b4db-be4d6364097a\") " pod="openstack/ssh-known-hosts-edpm-deployment-gllft" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.843640 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2531e40d-1556-4368-b4db-be4d6364097a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gllft\" (UID: \"2531e40d-1556-4368-b4db-be4d6364097a\") " pod="openstack/ssh-known-hosts-edpm-deployment-gllft" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.843688 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2531e40d-1556-4368-b4db-be4d6364097a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gllft\" (UID: \"2531e40d-1556-4368-b4db-be4d6364097a\") " pod="openstack/ssh-known-hosts-edpm-deployment-gllft" Jan 28 15:35:46 crc 
kubenswrapper[4981]: I0128 15:35:46.848810 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2531e40d-1556-4368-b4db-be4d6364097a-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gllft\" (UID: \"2531e40d-1556-4368-b4db-be4d6364097a\") " pod="openstack/ssh-known-hosts-edpm-deployment-gllft" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.849655 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2531e40d-1556-4368-b4db-be4d6364097a-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gllft\" (UID: \"2531e40d-1556-4368-b4db-be4d6364097a\") " pod="openstack/ssh-known-hosts-edpm-deployment-gllft" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.861881 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx5tx\" (UniqueName: \"kubernetes.io/projected/2531e40d-1556-4368-b4db-be4d6364097a-kube-api-access-fx5tx\") pod \"ssh-known-hosts-edpm-deployment-gllft\" (UID: \"2531e40d-1556-4368-b4db-be4d6364097a\") " pod="openstack/ssh-known-hosts-edpm-deployment-gllft" Jan 28 15:35:46 crc kubenswrapper[4981]: I0128 15:35:46.886883 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gllft" Jan 28 15:35:47 crc kubenswrapper[4981]: I0128 15:35:47.428881 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gllft"] Jan 28 15:35:47 crc kubenswrapper[4981]: I0128 15:35:47.453148 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gllft" event={"ID":"2531e40d-1556-4368-b4db-be4d6364097a","Type":"ContainerStarted","Data":"23fcfb0af4e274a9d5f22ad0cc9571a81b0b23e60c3ca1a4009c450fdf931cc3"} Jan 28 15:35:48 crc kubenswrapper[4981]: I0128 15:35:48.467780 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gllft" event={"ID":"2531e40d-1556-4368-b4db-be4d6364097a","Type":"ContainerStarted","Data":"eeaffc987f79800bd38e541f91c1288a461801b5342d7ad84f0b56091c73475a"} Jan 28 15:35:48 crc kubenswrapper[4981]: I0128 15:35:48.496529 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-gllft" podStartSLOduration=2.062774122 podStartE2EDuration="2.496504089s" podCreationTimestamp="2026-01-28 15:35:46 +0000 UTC" firstStartedPulling="2026-01-28 15:35:47.433297268 +0000 UTC m=+1958.885455509" lastFinishedPulling="2026-01-28 15:35:47.867027215 +0000 UTC m=+1959.319185476" observedRunningTime="2026-01-28 15:35:48.492409029 +0000 UTC m=+1959.944567320" watchObservedRunningTime="2026-01-28 15:35:48.496504089 +0000 UTC m=+1959.948662340" Jan 28 15:35:52 crc kubenswrapper[4981]: I0128 15:35:52.318622 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:35:52 crc kubenswrapper[4981]: E0128 15:35:52.319260 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:35:54 crc kubenswrapper[4981]: I0128 15:35:54.542075 4981 generic.go:334] "Generic (PLEG): container finished" podID="2531e40d-1556-4368-b4db-be4d6364097a" containerID="eeaffc987f79800bd38e541f91c1288a461801b5342d7ad84f0b56091c73475a" exitCode=0
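
The startup-latency entry above for ssh-known-hosts-edpm-deployment-gllft is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that E2E value minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the monotonic m=+ offsets). A quick check in Go, with the values hardcoded from the log line:

    package main

    import (
        "fmt"
        "math"
        "time"
    )

    func main() {
        // Wall-clock values from the pod_startup_latency_tracker entry.
        created, _ := time.Parse(time.RFC3339Nano, "2026-01-28T15:35:46Z")
        running, _ := time.Parse(time.RFC3339Nano, "2026-01-28T15:35:48.496504089Z")

        // Image-pull window from the monotonic (m=+...) offsets.
        firstPull, lastPull := 1958.885455509, 1959.319185476
        pull := time.Duration(math.Round((lastPull - firstPull) * 1e9))

        e2e := running.Sub(created)
        fmt.Println("podStartE2EDuration:", e2e)      // 2.496504089s
        fmt.Println("podStartSLOduration:", e2e-pull) // 2.062774122s
    }

Both printed values reproduce the logged 2.496504089s and 2.062774122, which supports the reading that the SLO duration excludes time spent pulling images.
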
Jan 28 15:35:54 crc kubenswrapper[4981]: I0128 15:35:54.542148 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gllft" event={"ID":"2531e40d-1556-4368-b4db-be4d6364097a","Type":"ContainerDied","Data":"eeaffc987f79800bd38e541f91c1288a461801b5342d7ad84f0b56091c73475a"} Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.001040 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gllft" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.117022 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2531e40d-1556-4368-b4db-be4d6364097a-ssh-key-openstack-edpm-ipam\") pod \"2531e40d-1556-4368-b4db-be4d6364097a\" (UID: \"2531e40d-1556-4368-b4db-be4d6364097a\") " Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.117216 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2531e40d-1556-4368-b4db-be4d6364097a-inventory-0\") pod \"2531e40d-1556-4368-b4db-be4d6364097a\" (UID: \"2531e40d-1556-4368-b4db-be4d6364097a\") " Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.117266 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx5tx\" (UniqueName: \"kubernetes.io/projected/2531e40d-1556-4368-b4db-be4d6364097a-kube-api-access-fx5tx\") pod \"2531e40d-1556-4368-b4db-be4d6364097a\" (UID: \"2531e40d-1556-4368-b4db-be4d6364097a\") " Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.123903 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2531e40d-1556-4368-b4db-be4d6364097a-kube-api-access-fx5tx" (OuterVolumeSpecName: "kube-api-access-fx5tx") pod "2531e40d-1556-4368-b4db-be4d6364097a" (UID: "2531e40d-1556-4368-b4db-be4d6364097a"). InnerVolumeSpecName "kube-api-access-fx5tx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.145647 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2531e40d-1556-4368-b4db-be4d6364097a-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "2531e40d-1556-4368-b4db-be4d6364097a" (UID: "2531e40d-1556-4368-b4db-be4d6364097a"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.164002 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2531e40d-1556-4368-b4db-be4d6364097a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2531e40d-1556-4368-b4db-be4d6364097a" (UID: "2531e40d-1556-4368-b4db-be4d6364097a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.219044 4981 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2531e40d-1556-4368-b4db-be4d6364097a-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.219079 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx5tx\" (UniqueName: \"kubernetes.io/projected/2531e40d-1556-4368-b4db-be4d6364097a-kube-api-access-fx5tx\") on node \"crc\" DevicePath \"\"" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.219093 4981 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2531e40d-1556-4368-b4db-be4d6364097a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.562144 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gllft" event={"ID":"2531e40d-1556-4368-b4db-be4d6364097a","Type":"ContainerDied","Data":"23fcfb0af4e274a9d5f22ad0cc9571a81b0b23e60c3ca1a4009c450fdf931cc3"} Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.562182 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23fcfb0af4e274a9d5f22ad0cc9571a81b0b23e60c3ca1a4009c450fdf931cc3" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.562259 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gllft" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.634077 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs"] Jan 28 15:35:56 crc kubenswrapper[4981]: E0128 15:35:56.634832 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2531e40d-1556-4368-b4db-be4d6364097a" containerName="ssh-known-hosts-edpm-deployment" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.634879 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="2531e40d-1556-4368-b4db-be4d6364097a" containerName="ssh-known-hosts-edpm-deployment" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.637073 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="2531e40d-1556-4368-b4db-be4d6364097a" containerName="ssh-known-hosts-edpm-deployment" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.637746 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.643724 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs"] Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.644236 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.649873 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.650159 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pz626" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.659115 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.728365 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/605a9090-e629-463f-9119-7229674dccc7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jz8bs\" (UID: \"605a9090-e629-463f-9119-7229674dccc7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.728524 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/605a9090-e629-463f-9119-7229674dccc7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jz8bs\" (UID: \"605a9090-e629-463f-9119-7229674dccc7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.728627 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcczd\" (UniqueName: \"kubernetes.io/projected/605a9090-e629-463f-9119-7229674dccc7-kube-api-access-rcczd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jz8bs\" (UID: \"605a9090-e629-463f-9119-7229674dccc7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.830994 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/605a9090-e629-463f-9119-7229674dccc7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jz8bs\" (UID: \"605a9090-e629-463f-9119-7229674dccc7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.831062 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcczd\" (UniqueName: \"kubernetes.io/projected/605a9090-e629-463f-9119-7229674dccc7-kube-api-access-rcczd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jz8bs\" (UID: \"605a9090-e629-463f-9119-7229674dccc7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.831277 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/605a9090-e629-463f-9119-7229674dccc7-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-jz8bs\" (UID: \"605a9090-e629-463f-9119-7229674dccc7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.834641 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/605a9090-e629-463f-9119-7229674dccc7-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jz8bs\" (UID: \"605a9090-e629-463f-9119-7229674dccc7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.840423 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/605a9090-e629-463f-9119-7229674dccc7-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jz8bs\" (UID: \"605a9090-e629-463f-9119-7229674dccc7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.850410 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcczd\" (UniqueName: \"kubernetes.io/projected/605a9090-e629-463f-9119-7229674dccc7-kube-api-access-rcczd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-jz8bs\" (UID: \"605a9090-e629-463f-9119-7229674dccc7\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs" Jan 28 15:35:56 crc kubenswrapper[4981]: I0128 15:35:56.964084 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs" Jan 28 15:35:57 crc kubenswrapper[4981]: I0128 15:35:57.527278 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs"] Jan 28 15:35:57 crc kubenswrapper[4981]: I0128 15:35:57.571343 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs" event={"ID":"605a9090-e629-463f-9119-7229674dccc7","Type":"ContainerStarted","Data":"affcf2405e727839413f4d4d9e27a1cde86ec4ed058ca0b02f711f21e79089bc"} Jan 28 15:35:58 crc kubenswrapper[4981]: I0128 15:35:58.582798 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs" event={"ID":"605a9090-e629-463f-9119-7229674dccc7","Type":"ContainerStarted","Data":"e5923927a59644a12e9af41e55b336f8ee98411c35902b178cb18a019d8a7159"} Jan 28 15:35:58 crc kubenswrapper[4981]: I0128 15:35:58.617793 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs" podStartSLOduration=2.1958822319999998 podStartE2EDuration="2.617770891s" podCreationTimestamp="2026-01-28 15:35:56 +0000 UTC" firstStartedPulling="2026-01-28 15:35:57.526611747 +0000 UTC m=+1968.978769998" lastFinishedPulling="2026-01-28 15:35:57.948500416 +0000 UTC m=+1969.400658657" observedRunningTime="2026-01-28 15:35:58.598019319 +0000 UTC m=+1970.050177600" watchObservedRunningTime="2026-01-28 15:35:58.617770891 +0000 UTC m=+1970.069929132" Jan 28 15:36:05 crc kubenswrapper[4981]: I0128 15:36:05.319603 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:36:05 crc kubenswrapper[4981]: E0128 15:36:05.320837 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:36:06 crc kubenswrapper[4981]: I0128 15:36:06.667405 4981 generic.go:334] "Generic (PLEG): container finished" podID="605a9090-e629-463f-9119-7229674dccc7" containerID="e5923927a59644a12e9af41e55b336f8ee98411c35902b178cb18a019d8a7159" exitCode=0 Jan 28 15:36:06 crc kubenswrapper[4981]: I0128 15:36:06.667564 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs" event={"ID":"605a9090-e629-463f-9119-7229674dccc7","Type":"ContainerDied","Data":"e5923927a59644a12e9af41e55b336f8ee98411c35902b178cb18a019d8a7159"} Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.125801 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.251932 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/605a9090-e629-463f-9119-7229674dccc7-ssh-key-openstack-edpm-ipam\") pod \"605a9090-e629-463f-9119-7229674dccc7\" (UID: \"605a9090-e629-463f-9119-7229674dccc7\") " Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.252071 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/605a9090-e629-463f-9119-7229674dccc7-inventory\") pod \"605a9090-e629-463f-9119-7229674dccc7\" (UID: \"605a9090-e629-463f-9119-7229674dccc7\") " Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.252210 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcczd\" (UniqueName: \"kubernetes.io/projected/605a9090-e629-463f-9119-7229674dccc7-kube-api-access-rcczd\") pod \"605a9090-e629-463f-9119-7229674dccc7\" (UID: \"605a9090-e629-463f-9119-7229674dccc7\") " Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.261609 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/605a9090-e629-463f-9119-7229674dccc7-kube-api-access-rcczd" (OuterVolumeSpecName: "kube-api-access-rcczd") pod "605a9090-e629-463f-9119-7229674dccc7" (UID: "605a9090-e629-463f-9119-7229674dccc7"). InnerVolumeSpecName "kube-api-access-rcczd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.284429 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/605a9090-e629-463f-9119-7229674dccc7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "605a9090-e629-463f-9119-7229674dccc7" (UID: "605a9090-e629-463f-9119-7229674dccc7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.286179 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/605a9090-e629-463f-9119-7229674dccc7-inventory" (OuterVolumeSpecName: "inventory") pod "605a9090-e629-463f-9119-7229674dccc7" (UID: "605a9090-e629-463f-9119-7229674dccc7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.355299 4981 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/605a9090-e629-463f-9119-7229674dccc7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.355359 4981 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/605a9090-e629-463f-9119-7229674dccc7-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.355379 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcczd\" (UniqueName: \"kubernetes.io/projected/605a9090-e629-463f-9119-7229674dccc7-kube-api-access-rcczd\") on node \"crc\" DevicePath \"\"" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.691124 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs" event={"ID":"605a9090-e629-463f-9119-7229674dccc7","Type":"ContainerDied","Data":"affcf2405e727839413f4d4d9e27a1cde86ec4ed058ca0b02f711f21e79089bc"} Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.691174 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="affcf2405e727839413f4d4d9e27a1cde86ec4ed058ca0b02f711f21e79089bc" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.691177 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-jz8bs" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.776030 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6"] Jan 28 15:36:08 crc kubenswrapper[4981]: E0128 15:36:08.776648 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="605a9090-e629-463f-9119-7229674dccc7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.776676 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="605a9090-e629-463f-9119-7229674dccc7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.776978 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="605a9090-e629-463f-9119-7229674dccc7" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.777864 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.780215 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pz626" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.780850 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.781217 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.790507 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.796457 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6"] Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.864424 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9576\" (UniqueName: \"kubernetes.io/projected/5e256fd3-d946-40f7-a93d-906351bf73f8-kube-api-access-z9576\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6\" (UID: \"5e256fd3-d946-40f7-a93d-906351bf73f8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.864562 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e256fd3-d946-40f7-a93d-906351bf73f8-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6\" (UID: \"5e256fd3-d946-40f7-a93d-906351bf73f8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.864605 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e256fd3-d946-40f7-a93d-906351bf73f8-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6\" (UID: \"5e256fd3-d946-40f7-a93d-906351bf73f8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.966814 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9576\" (UniqueName: \"kubernetes.io/projected/5e256fd3-d946-40f7-a93d-906351bf73f8-kube-api-access-z9576\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6\" (UID: \"5e256fd3-d946-40f7-a93d-906351bf73f8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.966990 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e256fd3-d946-40f7-a93d-906351bf73f8-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6\" (UID: \"5e256fd3-d946-40f7-a93d-906351bf73f8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.967045 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e256fd3-d946-40f7-a93d-906351bf73f8-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6\" (UID: \"5e256fd3-d946-40f7-a93d-906351bf73f8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.971680 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e256fd3-d946-40f7-a93d-906351bf73f8-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6\" (UID: \"5e256fd3-d946-40f7-a93d-906351bf73f8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.972288 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e256fd3-d946-40f7-a93d-906351bf73f8-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6\" (UID: \"5e256fd3-d946-40f7-a93d-906351bf73f8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6" Jan 28 15:36:08 crc kubenswrapper[4981]: I0128 15:36:08.988075 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9576\" (UniqueName: \"kubernetes.io/projected/5e256fd3-d946-40f7-a93d-906351bf73f8-kube-api-access-z9576\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6\" (UID: \"5e256fd3-d946-40f7-a93d-906351bf73f8\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6" Jan 28 15:36:09 crc kubenswrapper[4981]: I0128 15:36:09.147358 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6" Jan 28 15:36:09 crc kubenswrapper[4981]: I0128 15:36:09.705462 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6"] Jan 28 15:36:09 crc kubenswrapper[4981]: I0128 15:36:09.710453 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:36:10 crc kubenswrapper[4981]: I0128 15:36:10.162789 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 15:36:10 crc kubenswrapper[4981]: I0128 15:36:10.713560 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6" event={"ID":"5e256fd3-d946-40f7-a93d-906351bf73f8","Type":"ContainerStarted","Data":"c941b96d414e1bd5043c0432095f9b330bc4d17bc968e74c5d79acddbc1f6573"} Jan 28 15:36:10 crc kubenswrapper[4981]: I0128 15:36:10.713614 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6" event={"ID":"5e256fd3-d946-40f7-a93d-906351bf73f8","Type":"ContainerStarted","Data":"506346fc28451f500e9413cc73a0db1b5407a39ffef5295b3cb4c9d541888057"} Jan 28 15:36:10 crc kubenswrapper[4981]: I0128 15:36:10.727846 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6" podStartSLOduration=2.280094578 podStartE2EDuration="2.727814622s" podCreationTimestamp="2026-01-28 15:36:08 +0000 UTC" firstStartedPulling="2026-01-28 15:36:09.710042024 +0000 UTC m=+1981.162200265" lastFinishedPulling="2026-01-28 15:36:10.157762028 +0000 UTC m=+1981.609920309" observedRunningTime="2026-01-28 15:36:10.726888537 +0000 UTC m=+1982.179046798" watchObservedRunningTime="2026-01-28 15:36:10.727814622 +0000 UTC m=+1982.179972903" Jan 28 15:36:19 crc 
Jan 28 15:36:19 crc kubenswrapper[4981]: E0128 15:36:19.331352 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:36:19 crc kubenswrapper[4981]: I0128 15:36:19.809328 4981 generic.go:334] "Generic (PLEG): container finished" podID="5e256fd3-d946-40f7-a93d-906351bf73f8" containerID="c941b96d414e1bd5043c0432095f9b330bc4d17bc968e74c5d79acddbc1f6573" exitCode=0 Jan 28 15:36:19 crc kubenswrapper[4981]: I0128 15:36:19.809371 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6" event={"ID":"5e256fd3-d946-40f7-a93d-906351bf73f8","Type":"ContainerDied","Data":"c941b96d414e1bd5043c0432095f9b330bc4d17bc968e74c5d79acddbc1f6573"} Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.277438 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.330075 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9576\" (UniqueName: \"kubernetes.io/projected/5e256fd3-d946-40f7-a93d-906351bf73f8-kube-api-access-z9576\") pod \"5e256fd3-d946-40f7-a93d-906351bf73f8\" (UID: \"5e256fd3-d946-40f7-a93d-906351bf73f8\") " Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.330233 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e256fd3-d946-40f7-a93d-906351bf73f8-inventory\") pod \"5e256fd3-d946-40f7-a93d-906351bf73f8\" (UID: \"5e256fd3-d946-40f7-a93d-906351bf73f8\") " Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.330280 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e256fd3-d946-40f7-a93d-906351bf73f8-ssh-key-openstack-edpm-ipam\") pod \"5e256fd3-d946-40f7-a93d-906351bf73f8\" (UID: \"5e256fd3-d946-40f7-a93d-906351bf73f8\") " Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.337833 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e256fd3-d946-40f7-a93d-906351bf73f8-kube-api-access-z9576" (OuterVolumeSpecName: "kube-api-access-z9576") pod "5e256fd3-d946-40f7-a93d-906351bf73f8" (UID: "5e256fd3-d946-40f7-a93d-906351bf73f8"). InnerVolumeSpecName "kube-api-access-z9576". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.357626 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e256fd3-d946-40f7-a93d-906351bf73f8-inventory" (OuterVolumeSpecName: "inventory") pod "5e256fd3-d946-40f7-a93d-906351bf73f8" (UID: "5e256fd3-d946-40f7-a93d-906351bf73f8"). InnerVolumeSpecName "inventory".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.368801 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e256fd3-d946-40f7-a93d-906351bf73f8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5e256fd3-d946-40f7-a93d-906351bf73f8" (UID: "5e256fd3-d946-40f7-a93d-906351bf73f8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.433178 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9576\" (UniqueName: \"kubernetes.io/projected/5e256fd3-d946-40f7-a93d-906351bf73f8-kube-api-access-z9576\") on node \"crc\" DevicePath \"\"" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.433226 4981 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5e256fd3-d946-40f7-a93d-906351bf73f8-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.433238 4981 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5e256fd3-d946-40f7-a93d-906351bf73f8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.838800 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6" event={"ID":"5e256fd3-d946-40f7-a93d-906351bf73f8","Type":"ContainerDied","Data":"506346fc28451f500e9413cc73a0db1b5407a39ffef5295b3cb4c9d541888057"} Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.838844 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="506346fc28451f500e9413cc73a0db1b5407a39ffef5295b3cb4c9d541888057" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.838897 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.947560 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z"] Jan 28 15:36:21 crc kubenswrapper[4981]: E0128 15:36:21.947999 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e256fd3-d946-40f7-a93d-906351bf73f8" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.948025 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e256fd3-d946-40f7-a93d-906351bf73f8" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.948307 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e256fd3-d946-40f7-a93d-906351bf73f8" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.949023 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.952359 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.952973 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pz626" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.953012 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.953761 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.953916 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.954404 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.955821 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 28 15:36:21 crc kubenswrapper[4981]: I0128 15:36:21.957277 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.027639 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z"] Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.044446 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.044529 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.044552 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.044585 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-libvirt-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.044610 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.044626 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.044641 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.044658 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.044691 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.044742 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.044760 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: 
\"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.044793 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk5l2\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-kube-api-access-wk5l2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.044816 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.044837 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.146533 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.146599 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.146627 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.146652 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc 
kubenswrapper[4981]: I0128 15:36:22.146674 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.146723 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.146794 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.146819 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.146848 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk5l2\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-kube-api-access-wk5l2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.146891 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.146923 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.146954 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.147022 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.147056 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.152568 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.153297 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.153514 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.153704 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.154032 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.154306 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.155180 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.156504 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.156753 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.158636 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.158876 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.159146 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.162556 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.166277 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk5l2\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-kube-api-access-wk5l2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:22 crc kubenswrapper[4981]: I0128 15:36:22.278012 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:36:23 crc kubenswrapper[4981]: I0128 15:36:23.074036 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z"] Jan 28 15:36:23 crc kubenswrapper[4981]: W0128 15:36:23.079468 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3bc3ef2_85fc_4c54_b065_b7d5d889b6d4.slice/crio-6a3dba1f5730de46352ccf2d927f282d8c9293c0cec124a3d723468fb074b38a WatchSource:0}: Error finding container 6a3dba1f5730de46352ccf2d927f282d8c9293c0cec124a3d723468fb074b38a: Status 404 returned error can't find the container with id 6a3dba1f5730de46352ccf2d927f282d8c9293c0cec124a3d723468fb074b38a Jan 28 15:36:23 crc kubenswrapper[4981]: I0128 15:36:23.952897 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" event={"ID":"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4","Type":"ContainerStarted","Data":"6a3dba1f5730de46352ccf2d927f282d8c9293c0cec124a3d723468fb074b38a"} Jan 28 15:36:24 crc kubenswrapper[4981]: I0128 15:36:24.966855 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" event={"ID":"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4","Type":"ContainerStarted","Data":"a809d04624d27aa6956043ac8b82c4ec9439299226f99a2b8d79732273c3d4d8"} Jan 28 15:36:24 crc kubenswrapper[4981]: I0128 15:36:24.988820 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" podStartSLOduration=3.331316618 podStartE2EDuration="3.988801975s" podCreationTimestamp="2026-01-28 15:36:21 +0000 UTC" firstStartedPulling="2026-01-28 15:36:23.081438356 +0000 UTC m=+1994.533596597" lastFinishedPulling="2026-01-28 15:36:23.738923703 +0000 UTC m=+1995.191081954" observedRunningTime="2026-01-28 15:36:24.987994734 +0000 UTC m=+1996.440152995" watchObservedRunningTime="2026-01-28 15:36:24.988801975 +0000 UTC m=+1996.440960216" Jan 28 15:36:25 crc kubenswrapper[4981]: I0128 15:36:25.442273 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f768w"] Jan 28 15:36:25 crc kubenswrapper[4981]: I0128 15:36:25.444100 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f768w" Jan 28 15:36:25 crc kubenswrapper[4981]: I0128 15:36:25.467088 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f768w"] Jan 28 15:36:25 crc kubenswrapper[4981]: I0128 15:36:25.519373 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd818faa-72b3-49ad-bb74-d4ef2b6e772a-utilities\") pod \"redhat-operators-f768w\" (UID: \"dd818faa-72b3-49ad-bb74-d4ef2b6e772a\") " pod="openshift-marketplace/redhat-operators-f768w" Jan 28 15:36:25 crc kubenswrapper[4981]: I0128 15:36:25.519491 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd818faa-72b3-49ad-bb74-d4ef2b6e772a-catalog-content\") pod \"redhat-operators-f768w\" (UID: \"dd818faa-72b3-49ad-bb74-d4ef2b6e772a\") " pod="openshift-marketplace/redhat-operators-f768w" Jan 28 15:36:25 crc kubenswrapper[4981]: I0128 15:36:25.519604 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knbs5\" (UniqueName: \"kubernetes.io/projected/dd818faa-72b3-49ad-bb74-d4ef2b6e772a-kube-api-access-knbs5\") pod \"redhat-operators-f768w\" (UID: \"dd818faa-72b3-49ad-bb74-d4ef2b6e772a\") " pod="openshift-marketplace/redhat-operators-f768w" Jan 28 15:36:25 crc kubenswrapper[4981]: I0128 15:36:25.621745 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knbs5\" (UniqueName: \"kubernetes.io/projected/dd818faa-72b3-49ad-bb74-d4ef2b6e772a-kube-api-access-knbs5\") pod \"redhat-operators-f768w\" (UID: \"dd818faa-72b3-49ad-bb74-d4ef2b6e772a\") " pod="openshift-marketplace/redhat-operators-f768w" Jan 28 15:36:25 crc kubenswrapper[4981]: I0128 15:36:25.621820 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd818faa-72b3-49ad-bb74-d4ef2b6e772a-utilities\") pod \"redhat-operators-f768w\" (UID: \"dd818faa-72b3-49ad-bb74-d4ef2b6e772a\") " pod="openshift-marketplace/redhat-operators-f768w" Jan 28 15:36:25 crc kubenswrapper[4981]: I0128 15:36:25.621892 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd818faa-72b3-49ad-bb74-d4ef2b6e772a-catalog-content\") pod \"redhat-operators-f768w\" (UID: \"dd818faa-72b3-49ad-bb74-d4ef2b6e772a\") " pod="openshift-marketplace/redhat-operators-f768w" Jan 28 15:36:25 crc kubenswrapper[4981]: I0128 15:36:25.622335 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd818faa-72b3-49ad-bb74-d4ef2b6e772a-utilities\") pod \"redhat-operators-f768w\" (UID: \"dd818faa-72b3-49ad-bb74-d4ef2b6e772a\") " pod="openshift-marketplace/redhat-operators-f768w" Jan 28 15:36:25 crc kubenswrapper[4981]: I0128 15:36:25.622352 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd818faa-72b3-49ad-bb74-d4ef2b6e772a-catalog-content\") pod \"redhat-operators-f768w\" (UID: \"dd818faa-72b3-49ad-bb74-d4ef2b6e772a\") " pod="openshift-marketplace/redhat-operators-f768w" Jan 28 15:36:25 crc kubenswrapper[4981]: I0128 15:36:25.642698 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-knbs5\" (UniqueName: \"kubernetes.io/projected/dd818faa-72b3-49ad-bb74-d4ef2b6e772a-kube-api-access-knbs5\") pod \"redhat-operators-f768w\" (UID: \"dd818faa-72b3-49ad-bb74-d4ef2b6e772a\") " pod="openshift-marketplace/redhat-operators-f768w" Jan 28 15:36:25 crc kubenswrapper[4981]: I0128 15:36:25.765969 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f768w" Jan 28 15:36:26 crc kubenswrapper[4981]: I0128 15:36:26.211879 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f768w"] Jan 28 15:36:26 crc kubenswrapper[4981]: I0128 15:36:26.990630 4981 generic.go:334] "Generic (PLEG): container finished" podID="dd818faa-72b3-49ad-bb74-d4ef2b6e772a" containerID="a486382e1e240fcfeb87b3753f4f690f3a72eae149fea309ca6604acd1c3128a" exitCode=0 Jan 28 15:36:26 crc kubenswrapper[4981]: I0128 15:36:26.990687 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f768w" event={"ID":"dd818faa-72b3-49ad-bb74-d4ef2b6e772a","Type":"ContainerDied","Data":"a486382e1e240fcfeb87b3753f4f690f3a72eae149fea309ca6604acd1c3128a"} Jan 28 15:36:26 crc kubenswrapper[4981]: I0128 15:36:26.990720 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f768w" event={"ID":"dd818faa-72b3-49ad-bb74-d4ef2b6e772a","Type":"ContainerStarted","Data":"5f9890fca96cc9f7a8efea2403f4f1d8abc61a2b11e7df9e71e0b133a805981e"} Jan 28 15:36:29 crc kubenswrapper[4981]: I0128 15:36:29.012362 4981 generic.go:334] "Generic (PLEG): container finished" podID="dd818faa-72b3-49ad-bb74-d4ef2b6e772a" containerID="182830405fb6d71225b8916f9c64c9c04fb48bd95359dc7abfe8bca76fa60a77" exitCode=0 Jan 28 15:36:29 crc kubenswrapper[4981]: I0128 15:36:29.012452 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f768w" event={"ID":"dd818faa-72b3-49ad-bb74-d4ef2b6e772a","Type":"ContainerDied","Data":"182830405fb6d71225b8916f9c64c9c04fb48bd95359dc7abfe8bca76fa60a77"} Jan 28 15:36:30 crc kubenswrapper[4981]: I0128 15:36:30.024027 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f768w" event={"ID":"dd818faa-72b3-49ad-bb74-d4ef2b6e772a","Type":"ContainerStarted","Data":"81f01a8911dc1c230e3b8da3a336a7e87ebfdc96e15d9cb6f3cc9438444b7885"} Jan 28 15:36:30 crc kubenswrapper[4981]: I0128 15:36:30.040454 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f768w" podStartSLOduration=2.304450719 podStartE2EDuration="5.040431999s" podCreationTimestamp="2026-01-28 15:36:25 +0000 UTC" firstStartedPulling="2026-01-28 15:36:26.993082863 +0000 UTC m=+1998.445241104" lastFinishedPulling="2026-01-28 15:36:29.729064143 +0000 UTC m=+2001.181222384" observedRunningTime="2026-01-28 15:36:30.038934529 +0000 UTC m=+2001.491092770" watchObservedRunningTime="2026-01-28 15:36:30.040431999 +0000 UTC m=+2001.492590240" Jan 28 15:36:31 crc kubenswrapper[4981]: I0128 15:36:31.319314 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:36:33 crc kubenswrapper[4981]: I0128 15:36:33.048629 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" 
event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerStarted","Data":"f5f8d79504f79824674cbd4398c03b26e266195df3ae9a5c78dabe1b22add3fe"} Jan 28 15:36:33 crc kubenswrapper[4981]: I0128 15:36:33.400760 4981 scope.go:117] "RemoveContainer" containerID="e28f2fd3c48141efba819eaa185c1dffd30e0418be9e506fedebec5eab1162aa" Jan 28 15:36:35 crc kubenswrapper[4981]: I0128 15:36:35.766458 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f768w" Jan 28 15:36:35 crc kubenswrapper[4981]: I0128 15:36:35.767175 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f768w" Jan 28 15:36:36 crc kubenswrapper[4981]: I0128 15:36:36.890626 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f768w" podUID="dd818faa-72b3-49ad-bb74-d4ef2b6e772a" containerName="registry-server" probeResult="failure" output=< Jan 28 15:36:36 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s Jan 28 15:36:36 crc kubenswrapper[4981]: > Jan 28 15:36:45 crc kubenswrapper[4981]: I0128 15:36:45.834309 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f768w" Jan 28 15:36:45 crc kubenswrapper[4981]: I0128 15:36:45.903476 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f768w" Jan 28 15:36:46 crc kubenswrapper[4981]: I0128 15:36:46.079027 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f768w"] Jan 28 15:36:47 crc kubenswrapper[4981]: I0128 15:36:47.193901 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f768w" podUID="dd818faa-72b3-49ad-bb74-d4ef2b6e772a" containerName="registry-server" containerID="cri-o://81f01a8911dc1c230e3b8da3a336a7e87ebfdc96e15d9cb6f3cc9438444b7885" gracePeriod=2 Jan 28 15:36:47 crc kubenswrapper[4981]: I0128 15:36:47.685390 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f768w" Jan 28 15:36:47 crc kubenswrapper[4981]: I0128 15:36:47.767959 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd818faa-72b3-49ad-bb74-d4ef2b6e772a-catalog-content\") pod \"dd818faa-72b3-49ad-bb74-d4ef2b6e772a\" (UID: \"dd818faa-72b3-49ad-bb74-d4ef2b6e772a\") " Jan 28 15:36:47 crc kubenswrapper[4981]: I0128 15:36:47.768047 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knbs5\" (UniqueName: \"kubernetes.io/projected/dd818faa-72b3-49ad-bb74-d4ef2b6e772a-kube-api-access-knbs5\") pod \"dd818faa-72b3-49ad-bb74-d4ef2b6e772a\" (UID: \"dd818faa-72b3-49ad-bb74-d4ef2b6e772a\") " Jan 28 15:36:47 crc kubenswrapper[4981]: I0128 15:36:47.768295 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd818faa-72b3-49ad-bb74-d4ef2b6e772a-utilities\") pod \"dd818faa-72b3-49ad-bb74-d4ef2b6e772a\" (UID: \"dd818faa-72b3-49ad-bb74-d4ef2b6e772a\") " Jan 28 15:36:47 crc kubenswrapper[4981]: I0128 15:36:47.769091 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd818faa-72b3-49ad-bb74-d4ef2b6e772a-utilities" (OuterVolumeSpecName: "utilities") pod "dd818faa-72b3-49ad-bb74-d4ef2b6e772a" (UID: "dd818faa-72b3-49ad-bb74-d4ef2b6e772a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:36:47 crc kubenswrapper[4981]: I0128 15:36:47.777028 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd818faa-72b3-49ad-bb74-d4ef2b6e772a-kube-api-access-knbs5" (OuterVolumeSpecName: "kube-api-access-knbs5") pod "dd818faa-72b3-49ad-bb74-d4ef2b6e772a" (UID: "dd818faa-72b3-49ad-bb74-d4ef2b6e772a"). InnerVolumeSpecName "kube-api-access-knbs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:36:47 crc kubenswrapper[4981]: I0128 15:36:47.870995 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd818faa-72b3-49ad-bb74-d4ef2b6e772a-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:36:47 crc kubenswrapper[4981]: I0128 15:36:47.871029 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knbs5\" (UniqueName: \"kubernetes.io/projected/dd818faa-72b3-49ad-bb74-d4ef2b6e772a-kube-api-access-knbs5\") on node \"crc\" DevicePath \"\"" Jan 28 15:36:47 crc kubenswrapper[4981]: I0128 15:36:47.918012 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd818faa-72b3-49ad-bb74-d4ef2b6e772a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd818faa-72b3-49ad-bb74-d4ef2b6e772a" (UID: "dd818faa-72b3-49ad-bb74-d4ef2b6e772a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:36:47 crc kubenswrapper[4981]: I0128 15:36:47.975504 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd818faa-72b3-49ad-bb74-d4ef2b6e772a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:36:48 crc kubenswrapper[4981]: I0128 15:36:48.216537 4981 generic.go:334] "Generic (PLEG): container finished" podID="dd818faa-72b3-49ad-bb74-d4ef2b6e772a" containerID="81f01a8911dc1c230e3b8da3a336a7e87ebfdc96e15d9cb6f3cc9438444b7885" exitCode=0 Jan 28 15:36:48 crc kubenswrapper[4981]: I0128 15:36:48.216578 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f768w" Jan 28 15:36:48 crc kubenswrapper[4981]: I0128 15:36:48.216599 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f768w" event={"ID":"dd818faa-72b3-49ad-bb74-d4ef2b6e772a","Type":"ContainerDied","Data":"81f01a8911dc1c230e3b8da3a336a7e87ebfdc96e15d9cb6f3cc9438444b7885"} Jan 28 15:36:48 crc kubenswrapper[4981]: I0128 15:36:48.216645 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f768w" event={"ID":"dd818faa-72b3-49ad-bb74-d4ef2b6e772a","Type":"ContainerDied","Data":"5f9890fca96cc9f7a8efea2403f4f1d8abc61a2b11e7df9e71e0b133a805981e"} Jan 28 15:36:48 crc kubenswrapper[4981]: I0128 15:36:48.216667 4981 scope.go:117] "RemoveContainer" containerID="81f01a8911dc1c230e3b8da3a336a7e87ebfdc96e15d9cb6f3cc9438444b7885" Jan 28 15:36:48 crc kubenswrapper[4981]: I0128 15:36:48.244386 4981 scope.go:117] "RemoveContainer" containerID="182830405fb6d71225b8916f9c64c9c04fb48bd95359dc7abfe8bca76fa60a77" Jan 28 15:36:48 crc kubenswrapper[4981]: I0128 15:36:48.276132 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f768w"] Jan 28 15:36:48 crc kubenswrapper[4981]: I0128 15:36:48.281850 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f768w"] Jan 28 15:36:48 crc kubenswrapper[4981]: I0128 15:36:48.288296 4981 scope.go:117] "RemoveContainer" containerID="a486382e1e240fcfeb87b3753f4f690f3a72eae149fea309ca6604acd1c3128a" Jan 28 15:36:48 crc kubenswrapper[4981]: I0128 15:36:48.333914 4981 scope.go:117] "RemoveContainer" containerID="81f01a8911dc1c230e3b8da3a336a7e87ebfdc96e15d9cb6f3cc9438444b7885" Jan 28 15:36:48 crc kubenswrapper[4981]: E0128 15:36:48.334523 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81f01a8911dc1c230e3b8da3a336a7e87ebfdc96e15d9cb6f3cc9438444b7885\": container with ID starting with 81f01a8911dc1c230e3b8da3a336a7e87ebfdc96e15d9cb6f3cc9438444b7885 not found: ID does not exist" containerID="81f01a8911dc1c230e3b8da3a336a7e87ebfdc96e15d9cb6f3cc9438444b7885" Jan 28 15:36:48 crc kubenswrapper[4981]: I0128 15:36:48.334576 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81f01a8911dc1c230e3b8da3a336a7e87ebfdc96e15d9cb6f3cc9438444b7885"} err="failed to get container status \"81f01a8911dc1c230e3b8da3a336a7e87ebfdc96e15d9cb6f3cc9438444b7885\": rpc error: code = NotFound desc = could not find container \"81f01a8911dc1c230e3b8da3a336a7e87ebfdc96e15d9cb6f3cc9438444b7885\": container with ID starting with 81f01a8911dc1c230e3b8da3a336a7e87ebfdc96e15d9cb6f3cc9438444b7885 not found: ID does not exist" Jan 28 15:36:48 crc 
kubenswrapper[4981]: I0128 15:36:48.334612 4981 scope.go:117] "RemoveContainer" containerID="182830405fb6d71225b8916f9c64c9c04fb48bd95359dc7abfe8bca76fa60a77" Jan 28 15:36:48 crc kubenswrapper[4981]: E0128 15:36:48.335131 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"182830405fb6d71225b8916f9c64c9c04fb48bd95359dc7abfe8bca76fa60a77\": container with ID starting with 182830405fb6d71225b8916f9c64c9c04fb48bd95359dc7abfe8bca76fa60a77 not found: ID does not exist" containerID="182830405fb6d71225b8916f9c64c9c04fb48bd95359dc7abfe8bca76fa60a77" Jan 28 15:36:48 crc kubenswrapper[4981]: I0128 15:36:48.335171 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"182830405fb6d71225b8916f9c64c9c04fb48bd95359dc7abfe8bca76fa60a77"} err="failed to get container status \"182830405fb6d71225b8916f9c64c9c04fb48bd95359dc7abfe8bca76fa60a77\": rpc error: code = NotFound desc = could not find container \"182830405fb6d71225b8916f9c64c9c04fb48bd95359dc7abfe8bca76fa60a77\": container with ID starting with 182830405fb6d71225b8916f9c64c9c04fb48bd95359dc7abfe8bca76fa60a77 not found: ID does not exist" Jan 28 15:36:48 crc kubenswrapper[4981]: I0128 15:36:48.335330 4981 scope.go:117] "RemoveContainer" containerID="a486382e1e240fcfeb87b3753f4f690f3a72eae149fea309ca6604acd1c3128a" Jan 28 15:36:48 crc kubenswrapper[4981]: E0128 15:36:48.335734 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a486382e1e240fcfeb87b3753f4f690f3a72eae149fea309ca6604acd1c3128a\": container with ID starting with a486382e1e240fcfeb87b3753f4f690f3a72eae149fea309ca6604acd1c3128a not found: ID does not exist" containerID="a486382e1e240fcfeb87b3753f4f690f3a72eae149fea309ca6604acd1c3128a" Jan 28 15:36:48 crc kubenswrapper[4981]: I0128 15:36:48.335816 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a486382e1e240fcfeb87b3753f4f690f3a72eae149fea309ca6604acd1c3128a"} err="failed to get container status \"a486382e1e240fcfeb87b3753f4f690f3a72eae149fea309ca6604acd1c3128a\": rpc error: code = NotFound desc = could not find container \"a486382e1e240fcfeb87b3753f4f690f3a72eae149fea309ca6604acd1c3128a\": container with ID starting with a486382e1e240fcfeb87b3753f4f690f3a72eae149fea309ca6604acd1c3128a not found: ID does not exist" Jan 28 15:36:49 crc kubenswrapper[4981]: I0128 15:36:49.339328 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd818faa-72b3-49ad-bb74-d4ef2b6e772a" path="/var/lib/kubelet/pods/dd818faa-72b3-49ad-bb74-d4ef2b6e772a/volumes" Jan 28 15:37:00 crc kubenswrapper[4981]: I0128 15:37:00.343710 4981 generic.go:334] "Generic (PLEG): container finished" podID="d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4" containerID="a809d04624d27aa6956043ac8b82c4ec9439299226f99a2b8d79732273c3d4d8" exitCode=0 Jan 28 15:37:00 crc kubenswrapper[4981]: I0128 15:37:00.343841 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" event={"ID":"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4","Type":"ContainerDied","Data":"a809d04624d27aa6956043ac8b82c4ec9439299226f99a2b8d79732273c3d4d8"} Jan 28 15:37:01 crc kubenswrapper[4981]: I0128 15:37:01.834227 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.004627 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-telemetry-combined-ca-bundle\") pod \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.004695 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-libvirt-combined-ca-bundle\") pod \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.004731 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-ssh-key-openstack-edpm-ipam\") pod \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.004865 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-nova-combined-ca-bundle\") pod \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.005698 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-bootstrap-combined-ca-bundle\") pod \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.005750 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-neutron-metadata-combined-ca-bundle\") pod \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.005791 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.005854 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.005878 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-repo-setup-combined-ca-bundle\") pod \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\" (UID: 
\"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.005925 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.006031 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-ovn-default-certs-0\") pod \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.006085 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-ovn-combined-ca-bundle\") pod \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.006130 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk5l2\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-kube-api-access-wk5l2\") pod \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.006153 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-inventory\") pod \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\" (UID: \"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4\") " Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.013152 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4" (UID: "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.013835 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4" (UID: "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.013933 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4" (UID: "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.014845 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4" (UID: "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.017022 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4" (UID: "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.018228 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4" (UID: "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.018346 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4" (UID: "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.018430 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4" (UID: "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.022786 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4" (UID: "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.025850 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4" (UID: "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.028521 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4" (UID: "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.033359 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-kube-api-access-wk5l2" (OuterVolumeSpecName: "kube-api-access-wk5l2") pod "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4" (UID: "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4"). InnerVolumeSpecName "kube-api-access-wk5l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.048596 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-inventory" (OuterVolumeSpecName: "inventory") pod "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4" (UID: "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.049777 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4" (UID: "d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.108822 4981 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.109157 4981 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.109179 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk5l2\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-kube-api-access-wk5l2\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.109222 4981 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.109241 4981 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.109258 4981 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.109274 4981 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.109289 4981 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.109339 4981 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.109356 4981 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.109376 4981 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.109395 4981 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.109412 4981 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.109429 4981 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.361694 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" event={"ID":"d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4","Type":"ContainerDied","Data":"6a3dba1f5730de46352ccf2d927f282d8c9293c0cec124a3d723468fb074b38a"} Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.361737 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a3dba1f5730de46352ccf2d927f282d8c9293c0cec124a3d723468fb074b38a" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.361793 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.484572 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl"] Jan 28 15:37:02 crc kubenswrapper[4981]: E0128 15:37:02.485289 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd818faa-72b3-49ad-bb74-d4ef2b6e772a" containerName="extract-utilities" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.485381 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd818faa-72b3-49ad-bb74-d4ef2b6e772a" containerName="extract-utilities" Jan 28 15:37:02 crc kubenswrapper[4981]: E0128 15:37:02.485489 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd818faa-72b3-49ad-bb74-d4ef2b6e772a" containerName="extract-content" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.485553 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd818faa-72b3-49ad-bb74-d4ef2b6e772a" containerName="extract-content" Jan 28 15:37:02 crc kubenswrapper[4981]: E0128 15:37:02.485614 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.485682 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 28 15:37:02 crc kubenswrapper[4981]: E0128 15:37:02.485754 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd818faa-72b3-49ad-bb74-d4ef2b6e772a" containerName="registry-server" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.485812 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd818faa-72b3-49ad-bb74-d4ef2b6e772a" containerName="registry-server" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.486059 4981 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="dd818faa-72b3-49ad-bb74-d4ef2b6e772a" containerName="registry-server" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.486150 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.486930 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.491914 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.492174 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.492452 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.492728 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.496579 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pz626" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.520710 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl"] Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.620028 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhjsm\" (UniqueName: \"kubernetes.io/projected/66c75472-5f94-47b6-bed5-94306835c5fa-kube-api-access-fhjsm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mshfl\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.620313 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c75472-5f94-47b6-bed5-94306835c5fa-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mshfl\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.620356 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/66c75472-5f94-47b6-bed5-94306835c5fa-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mshfl\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.620447 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/66c75472-5f94-47b6-bed5-94306835c5fa-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mshfl\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.620680 4981 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66c75472-5f94-47b6-bed5-94306835c5fa-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mshfl\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.722931 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhjsm\" (UniqueName: \"kubernetes.io/projected/66c75472-5f94-47b6-bed5-94306835c5fa-kube-api-access-fhjsm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mshfl\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.723177 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c75472-5f94-47b6-bed5-94306835c5fa-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mshfl\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.723242 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/66c75472-5f94-47b6-bed5-94306835c5fa-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mshfl\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.723280 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/66c75472-5f94-47b6-bed5-94306835c5fa-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mshfl\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.723375 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66c75472-5f94-47b6-bed5-94306835c5fa-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mshfl\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.724404 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/66c75472-5f94-47b6-bed5-94306835c5fa-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mshfl\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.728498 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66c75472-5f94-47b6-bed5-94306835c5fa-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mshfl\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.729326 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/66c75472-5f94-47b6-bed5-94306835c5fa-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mshfl\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.729557 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c75472-5f94-47b6-bed5-94306835c5fa-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mshfl\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.759293 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhjsm\" (UniqueName: \"kubernetes.io/projected/66c75472-5f94-47b6-bed5-94306835c5fa-kube-api-access-fhjsm\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-mshfl\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" Jan 28 15:37:02 crc kubenswrapper[4981]: I0128 15:37:02.806594 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" Jan 28 15:37:03 crc kubenswrapper[4981]: I0128 15:37:03.390044 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl"] Jan 28 15:37:04 crc kubenswrapper[4981]: I0128 15:37:04.388416 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" event={"ID":"66c75472-5f94-47b6-bed5-94306835c5fa","Type":"ContainerStarted","Data":"5b6eee47272bae7010bec90e07f3f5b8e89b1fe8c71f8a63cfeef724f0525d7e"} Jan 28 15:37:06 crc kubenswrapper[4981]: I0128 15:37:06.410746 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" event={"ID":"66c75472-5f94-47b6-bed5-94306835c5fa","Type":"ContainerStarted","Data":"11cbca7cfca97798ea67407c32754c99384f469691ed9af8661cd07695cf94d2"} Jan 28 15:38:09 crc kubenswrapper[4981]: I0128 15:38:09.017574 4981 generic.go:334] "Generic (PLEG): container finished" podID="66c75472-5f94-47b6-bed5-94306835c5fa" containerID="11cbca7cfca97798ea67407c32754c99384f469691ed9af8661cd07695cf94d2" exitCode=0 Jan 28 15:38:09 crc kubenswrapper[4981]: I0128 15:38:09.018089 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" event={"ID":"66c75472-5f94-47b6-bed5-94306835c5fa","Type":"ContainerDied","Data":"11cbca7cfca97798ea67407c32754c99384f469691ed9af8661cd07695cf94d2"} Jan 28 15:38:10 crc kubenswrapper[4981]: I0128 15:38:10.456390 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" Jan 28 15:38:10 crc kubenswrapper[4981]: I0128 15:38:10.537387 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/66c75472-5f94-47b6-bed5-94306835c5fa-ovncontroller-config-0\") pod \"66c75472-5f94-47b6-bed5-94306835c5fa\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " Jan 28 15:38:10 crc kubenswrapper[4981]: I0128 15:38:10.537588 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66c75472-5f94-47b6-bed5-94306835c5fa-inventory\") pod \"66c75472-5f94-47b6-bed5-94306835c5fa\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " Jan 28 15:38:10 crc kubenswrapper[4981]: I0128 15:38:10.537621 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhjsm\" (UniqueName: \"kubernetes.io/projected/66c75472-5f94-47b6-bed5-94306835c5fa-kube-api-access-fhjsm\") pod \"66c75472-5f94-47b6-bed5-94306835c5fa\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " Jan 28 15:38:10 crc kubenswrapper[4981]: I0128 15:38:10.537653 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/66c75472-5f94-47b6-bed5-94306835c5fa-ssh-key-openstack-edpm-ipam\") pod \"66c75472-5f94-47b6-bed5-94306835c5fa\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " Jan 28 15:38:10 crc kubenswrapper[4981]: I0128 15:38:10.537733 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c75472-5f94-47b6-bed5-94306835c5fa-ovn-combined-ca-bundle\") pod \"66c75472-5f94-47b6-bed5-94306835c5fa\" (UID: \"66c75472-5f94-47b6-bed5-94306835c5fa\") " Jan 28 15:38:10 crc kubenswrapper[4981]: I0128 15:38:10.546402 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66c75472-5f94-47b6-bed5-94306835c5fa-kube-api-access-fhjsm" (OuterVolumeSpecName: "kube-api-access-fhjsm") pod "66c75472-5f94-47b6-bed5-94306835c5fa" (UID: "66c75472-5f94-47b6-bed5-94306835c5fa"). InnerVolumeSpecName "kube-api-access-fhjsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:38:10 crc kubenswrapper[4981]: I0128 15:38:10.550143 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c75472-5f94-47b6-bed5-94306835c5fa-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "66c75472-5f94-47b6-bed5-94306835c5fa" (UID: "66c75472-5f94-47b6-bed5-94306835c5fa"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:38:10 crc kubenswrapper[4981]: I0128 15:38:10.575927 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66c75472-5f94-47b6-bed5-94306835c5fa-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "66c75472-5f94-47b6-bed5-94306835c5fa" (UID: "66c75472-5f94-47b6-bed5-94306835c5fa"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:38:10 crc kubenswrapper[4981]: I0128 15:38:10.582432 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c75472-5f94-47b6-bed5-94306835c5fa-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "66c75472-5f94-47b6-bed5-94306835c5fa" (UID: "66c75472-5f94-47b6-bed5-94306835c5fa"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:38:10 crc kubenswrapper[4981]: I0128 15:38:10.614075 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66c75472-5f94-47b6-bed5-94306835c5fa-inventory" (OuterVolumeSpecName: "inventory") pod "66c75472-5f94-47b6-bed5-94306835c5fa" (UID: "66c75472-5f94-47b6-bed5-94306835c5fa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:38:10 crc kubenswrapper[4981]: I0128 15:38:10.640510 4981 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/66c75472-5f94-47b6-bed5-94306835c5fa-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:10 crc kubenswrapper[4981]: I0128 15:38:10.640549 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhjsm\" (UniqueName: \"kubernetes.io/projected/66c75472-5f94-47b6-bed5-94306835c5fa-kube-api-access-fhjsm\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:10 crc kubenswrapper[4981]: I0128 15:38:10.640566 4981 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/66c75472-5f94-47b6-bed5-94306835c5fa-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:10 crc kubenswrapper[4981]: I0128 15:38:10.640578 4981 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66c75472-5f94-47b6-bed5-94306835c5fa-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:10 crc kubenswrapper[4981]: I0128 15:38:10.640590 4981 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/66c75472-5f94-47b6-bed5-94306835c5fa-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.043624 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" event={"ID":"66c75472-5f94-47b6-bed5-94306835c5fa","Type":"ContainerDied","Data":"5b6eee47272bae7010bec90e07f3f5b8e89b1fe8c71f8a63cfeef724f0525d7e"} Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.043677 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b6eee47272bae7010bec90e07f3f5b8e89b1fe8c71f8a63cfeef724f0525d7e" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.043895 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-mshfl" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.135078 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp"] Jan 28 15:38:11 crc kubenswrapper[4981]: E0128 15:38:11.135508 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66c75472-5f94-47b6-bed5-94306835c5fa" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.135532 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="66c75472-5f94-47b6-bed5-94306835c5fa" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.135771 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="66c75472-5f94-47b6-bed5-94306835c5fa" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.137382 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.139387 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.139621 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.139826 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pz626" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.139849 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.139955 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.140985 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.144375 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp"] Jan 28 15:38:11 crc kubenswrapper[4981]: E0128 15:38:11.210688 4981 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66c75472_5f94_47b6_bed5_94306835c5fa.slice/crio-5b6eee47272bae7010bec90e07f3f5b8e89b1fe8c71f8a63cfeef724f0525d7e\": RecentStats: unable to find data in memory cache]" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.259098 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.259184 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.259284 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.259709 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9nbc\" (UniqueName: \"kubernetes.io/projected/fa2e6c63-891a-4395-8270-942b5d5f168f-kube-api-access-m9nbc\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.259906 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.259986 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.361998 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9nbc\" (UniqueName: \"kubernetes.io/projected/fa2e6c63-891a-4395-8270-942b5d5f168f-kube-api-access-m9nbc\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.362221 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.362279 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-neutron-metadata-combined-ca-bundle\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.362344 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.362393 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.362487 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.367656 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.368165 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.369293 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.369479 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.374114 4981 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.393310 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9nbc\" (UniqueName: \"kubernetes.io/projected/fa2e6c63-891a-4395-8270-942b5d5f168f-kube-api-access-m9nbc\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:11 crc kubenswrapper[4981]: I0128 15:38:11.454451 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:38:12 crc kubenswrapper[4981]: I0128 15:38:12.068515 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp"] Jan 28 15:38:12 crc kubenswrapper[4981]: W0128 15:38:12.072590 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa2e6c63_891a_4395_8270_942b5d5f168f.slice/crio-d55a8b9c1d3da67975bc60aeb0c93e08f769222afce53934dc43ed7d06fa4428 WatchSource:0}: Error finding container d55a8b9c1d3da67975bc60aeb0c93e08f769222afce53934dc43ed7d06fa4428: Status 404 returned error can't find the container with id d55a8b9c1d3da67975bc60aeb0c93e08f769222afce53934dc43ed7d06fa4428 Jan 28 15:38:13 crc kubenswrapper[4981]: I0128 15:38:13.064023 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" event={"ID":"fa2e6c63-891a-4395-8270-942b5d5f168f","Type":"ContainerStarted","Data":"0f29e061c2ea2e339594346d5003bfac1d65d8da3c88e9096da10765823f8de5"} Jan 28 15:38:13 crc kubenswrapper[4981]: I0128 15:38:13.064542 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" event={"ID":"fa2e6c63-891a-4395-8270-942b5d5f168f","Type":"ContainerStarted","Data":"d55a8b9c1d3da67975bc60aeb0c93e08f769222afce53934dc43ed7d06fa4428"} Jan 28 15:38:49 crc kubenswrapper[4981]: I0128 15:38:49.897528 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:38:49 crc kubenswrapper[4981]: I0128 15:38:49.898120 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:38:59 crc kubenswrapper[4981]: I0128 15:38:59.581376 4981 generic.go:334] "Generic (PLEG): container finished" podID="fa2e6c63-891a-4395-8270-942b5d5f168f" containerID="0f29e061c2ea2e339594346d5003bfac1d65d8da3c88e9096da10765823f8de5" exitCode=0 Jan 28 15:38:59 crc kubenswrapper[4981]: 
I0128 15:38:59.581454 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" event={"ID":"fa2e6c63-891a-4395-8270-942b5d5f168f","Type":"ContainerDied","Data":"0f29e061c2ea2e339594346d5003bfac1d65d8da3c88e9096da10765823f8de5"} Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.013369 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.063829 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-neutron-ovn-metadata-agent-neutron-config-0\") pod \"fa2e6c63-891a-4395-8270-942b5d5f168f\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.063877 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-neutron-metadata-combined-ca-bundle\") pod \"fa2e6c63-891a-4395-8270-942b5d5f168f\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.063918 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-inventory\") pod \"fa2e6c63-891a-4395-8270-942b5d5f168f\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.063996 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-ssh-key-openstack-edpm-ipam\") pod \"fa2e6c63-891a-4395-8270-942b5d5f168f\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.064071 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9nbc\" (UniqueName: \"kubernetes.io/projected/fa2e6c63-891a-4395-8270-942b5d5f168f-kube-api-access-m9nbc\") pod \"fa2e6c63-891a-4395-8270-942b5d5f168f\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.064115 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-nova-metadata-neutron-config-0\") pod \"fa2e6c63-891a-4395-8270-942b5d5f168f\" (UID: \"fa2e6c63-891a-4395-8270-942b5d5f168f\") " Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.069472 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "fa2e6c63-891a-4395-8270-942b5d5f168f" (UID: "fa2e6c63-891a-4395-8270-942b5d5f168f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.070343 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa2e6c63-891a-4395-8270-942b5d5f168f-kube-api-access-m9nbc" (OuterVolumeSpecName: "kube-api-access-m9nbc") pod "fa2e6c63-891a-4395-8270-942b5d5f168f" (UID: "fa2e6c63-891a-4395-8270-942b5d5f168f"). InnerVolumeSpecName "kube-api-access-m9nbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.090927 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "fa2e6c63-891a-4395-8270-942b5d5f168f" (UID: "fa2e6c63-891a-4395-8270-942b5d5f168f"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.100446 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-inventory" (OuterVolumeSpecName: "inventory") pod "fa2e6c63-891a-4395-8270-942b5d5f168f" (UID: "fa2e6c63-891a-4395-8270-942b5d5f168f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.104561 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fa2e6c63-891a-4395-8270-942b5d5f168f" (UID: "fa2e6c63-891a-4395-8270-942b5d5f168f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.117485 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "fa2e6c63-891a-4395-8270-942b5d5f168f" (UID: "fa2e6c63-891a-4395-8270-942b5d5f168f"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.165435 4981 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.165472 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9nbc\" (UniqueName: \"kubernetes.io/projected/fa2e6c63-891a-4395-8270-942b5d5f168f-kube-api-access-m9nbc\") on node \"crc\" DevicePath \"\"" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.165485 4981 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.165500 4981 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.165515 4981 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.165530 4981 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fa2e6c63-891a-4395-8270-942b5d5f168f-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.606467 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" event={"ID":"fa2e6c63-891a-4395-8270-942b5d5f168f","Type":"ContainerDied","Data":"d55a8b9c1d3da67975bc60aeb0c93e08f769222afce53934dc43ed7d06fa4428"} Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.606520 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.606529 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d55a8b9c1d3da67975bc60aeb0c93e08f769222afce53934dc43ed7d06fa4428" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.722912 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p"] Jan 28 15:39:01 crc kubenswrapper[4981]: E0128 15:39:01.723596 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa2e6c63-891a-4395-8270-942b5d5f168f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.723618 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa2e6c63-891a-4395-8270-942b5d5f168f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.723848 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa2e6c63-891a-4395-8270-942b5d5f168f" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.724583 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.732712 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.732775 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.732806 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.733023 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pz626" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.733295 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.747350 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p"] Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.776940 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.777298 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.777470 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.777600 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.777663 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtd6x\" (UniqueName: \"kubernetes.io/projected/e78a3044-c335-4c2f-9fa6-314f2d40ef11-kube-api-access-rtd6x\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.879124 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.879493 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.879610 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.879757 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtd6x\" (UniqueName: \"kubernetes.io/projected/e78a3044-c335-4c2f-9fa6-314f2d40ef11-kube-api-access-rtd6x\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.879908 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.884247 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.884722 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.885017 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.885044 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" Jan 28 15:39:01 crc kubenswrapper[4981]: I0128 15:39:01.906059 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtd6x\" (UniqueName: \"kubernetes.io/projected/e78a3044-c335-4c2f-9fa6-314f2d40ef11-kube-api-access-rtd6x\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" Jan 28 15:39:02 crc kubenswrapper[4981]: I0128 15:39:02.052625 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" Jan 28 15:39:02 crc kubenswrapper[4981]: I0128 15:39:02.606236 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p"] Jan 28 15:39:03 crc kubenswrapper[4981]: I0128 15:39:03.624823 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" event={"ID":"e78a3044-c335-4c2f-9fa6-314f2d40ef11","Type":"ContainerStarted","Data":"8d27158a72bce127725f26859bec2055a975d55018834dc6f96052979a401730"} Jan 28 15:39:03 crc kubenswrapper[4981]: I0128 15:39:03.625102 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" event={"ID":"e78a3044-c335-4c2f-9fa6-314f2d40ef11","Type":"ContainerStarted","Data":"39c82dce179ae6fbdde256df248c02954612bfa133d4ea31752bc7954d08d36b"} Jan 28 15:39:03 crc kubenswrapper[4981]: I0128 15:39:03.643023 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" podStartSLOduration=2.215249451 podStartE2EDuration="2.643004552s" podCreationTimestamp="2026-01-28 15:39:01 +0000 UTC" firstStartedPulling="2026-01-28 15:39:02.611029685 +0000 UTC m=+2154.063187936" lastFinishedPulling="2026-01-28 15:39:03.038784796 +0000 UTC m=+2154.490943037" observedRunningTime="2026-01-28 15:39:03.638791461 +0000 UTC m=+2155.090949722" watchObservedRunningTime="2026-01-28 15:39:03.643004552 +0000 UTC m=+2155.095162793" Jan 28 15:39:19 crc kubenswrapper[4981]: I0128 15:39:19.897744 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:39:19 crc kubenswrapper[4981]: I0128 15:39:19.898491 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:39:38 crc kubenswrapper[4981]: I0128 15:39:38.531993 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hvdrf"] Jan 28 15:39:38 crc kubenswrapper[4981]: I0128 15:39:38.535018 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvdrf" Jan 28 15:39:38 crc kubenswrapper[4981]: I0128 15:39:38.560396 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvdrf"] Jan 28 15:39:38 crc kubenswrapper[4981]: I0128 15:39:38.693272 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/842d4a1b-3ad0-41d2-b342-1c83e5ed5594-utilities\") pod \"redhat-marketplace-hvdrf\" (UID: \"842d4a1b-3ad0-41d2-b342-1c83e5ed5594\") " pod="openshift-marketplace/redhat-marketplace-hvdrf" Jan 28 15:39:38 crc kubenswrapper[4981]: I0128 15:39:38.693328 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q2g2\" (UniqueName: \"kubernetes.io/projected/842d4a1b-3ad0-41d2-b342-1c83e5ed5594-kube-api-access-5q2g2\") pod \"redhat-marketplace-hvdrf\" (UID: \"842d4a1b-3ad0-41d2-b342-1c83e5ed5594\") " pod="openshift-marketplace/redhat-marketplace-hvdrf" Jan 28 15:39:38 crc kubenswrapper[4981]: I0128 15:39:38.693527 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/842d4a1b-3ad0-41d2-b342-1c83e5ed5594-catalog-content\") pod \"redhat-marketplace-hvdrf\" (UID: \"842d4a1b-3ad0-41d2-b342-1c83e5ed5594\") " pod="openshift-marketplace/redhat-marketplace-hvdrf" Jan 28 15:39:38 crc kubenswrapper[4981]: I0128 15:39:38.795396 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/842d4a1b-3ad0-41d2-b342-1c83e5ed5594-catalog-content\") pod \"redhat-marketplace-hvdrf\" (UID: \"842d4a1b-3ad0-41d2-b342-1c83e5ed5594\") " pod="openshift-marketplace/redhat-marketplace-hvdrf" Jan 28 15:39:38 crc kubenswrapper[4981]: I0128 15:39:38.795569 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/842d4a1b-3ad0-41d2-b342-1c83e5ed5594-utilities\") pod \"redhat-marketplace-hvdrf\" (UID: \"842d4a1b-3ad0-41d2-b342-1c83e5ed5594\") " pod="openshift-marketplace/redhat-marketplace-hvdrf" Jan 28 15:39:38 crc kubenswrapper[4981]: I0128 15:39:38.795607 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q2g2\" (UniqueName: \"kubernetes.io/projected/842d4a1b-3ad0-41d2-b342-1c83e5ed5594-kube-api-access-5q2g2\") pod \"redhat-marketplace-hvdrf\" (UID: \"842d4a1b-3ad0-41d2-b342-1c83e5ed5594\") " pod="openshift-marketplace/redhat-marketplace-hvdrf" Jan 28 15:39:38 crc kubenswrapper[4981]: I0128 15:39:38.796245 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/842d4a1b-3ad0-41d2-b342-1c83e5ed5594-catalog-content\") pod \"redhat-marketplace-hvdrf\" (UID: \"842d4a1b-3ad0-41d2-b342-1c83e5ed5594\") " pod="openshift-marketplace/redhat-marketplace-hvdrf" Jan 28 15:39:38 crc kubenswrapper[4981]: I0128 15:39:38.796452 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/842d4a1b-3ad0-41d2-b342-1c83e5ed5594-utilities\") pod \"redhat-marketplace-hvdrf\" (UID: \"842d4a1b-3ad0-41d2-b342-1c83e5ed5594\") " pod="openshift-marketplace/redhat-marketplace-hvdrf" Jan 28 15:39:38 crc kubenswrapper[4981]: I0128 15:39:38.820455 4981 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-5q2g2\" (UniqueName: \"kubernetes.io/projected/842d4a1b-3ad0-41d2-b342-1c83e5ed5594-kube-api-access-5q2g2\") pod \"redhat-marketplace-hvdrf\" (UID: \"842d4a1b-3ad0-41d2-b342-1c83e5ed5594\") " pod="openshift-marketplace/redhat-marketplace-hvdrf" Jan 28 15:39:38 crc kubenswrapper[4981]: I0128 15:39:38.862332 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvdrf" Jan 28 15:39:39 crc kubenswrapper[4981]: I0128 15:39:39.348855 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvdrf"] Jan 28 15:39:39 crc kubenswrapper[4981]: I0128 15:39:39.978903 4981 generic.go:334] "Generic (PLEG): container finished" podID="842d4a1b-3ad0-41d2-b342-1c83e5ed5594" containerID="0de7ed313b188eb3cd74fcea7f070627385133e2c825ed29651b24891a850773" exitCode=0 Jan 28 15:39:39 crc kubenswrapper[4981]: I0128 15:39:39.978947 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvdrf" event={"ID":"842d4a1b-3ad0-41d2-b342-1c83e5ed5594","Type":"ContainerDied","Data":"0de7ed313b188eb3cd74fcea7f070627385133e2c825ed29651b24891a850773"} Jan 28 15:39:39 crc kubenswrapper[4981]: I0128 15:39:39.978974 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvdrf" event={"ID":"842d4a1b-3ad0-41d2-b342-1c83e5ed5594","Type":"ContainerStarted","Data":"453e6fd75e35728fbbd49b1cfd6b1a665ceb8ed85e77af98d1fec145e9db91e1"} Jan 28 15:39:41 crc kubenswrapper[4981]: I0128 15:39:41.035136 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvdrf" event={"ID":"842d4a1b-3ad0-41d2-b342-1c83e5ed5594","Type":"ContainerStarted","Data":"d0ca52d7aa916f28d381bea65c6a35d44a3fc586ce6891b8660201effc04f5e2"} Jan 28 15:39:42 crc kubenswrapper[4981]: I0128 15:39:42.064299 4981 generic.go:334] "Generic (PLEG): container finished" podID="842d4a1b-3ad0-41d2-b342-1c83e5ed5594" containerID="d0ca52d7aa916f28d381bea65c6a35d44a3fc586ce6891b8660201effc04f5e2" exitCode=0 Jan 28 15:39:42 crc kubenswrapper[4981]: I0128 15:39:42.064413 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvdrf" event={"ID":"842d4a1b-3ad0-41d2-b342-1c83e5ed5594","Type":"ContainerDied","Data":"d0ca52d7aa916f28d381bea65c6a35d44a3fc586ce6891b8660201effc04f5e2"} Jan 28 15:39:42 crc kubenswrapper[4981]: I0128 15:39:42.065944 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvdrf" event={"ID":"842d4a1b-3ad0-41d2-b342-1c83e5ed5594","Type":"ContainerStarted","Data":"31a8dd2bf3baee00bf684aaf6f69772fdb5c354b5945b83871c3397589d5df4c"} Jan 28 15:39:42 crc kubenswrapper[4981]: I0128 15:39:42.089648 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hvdrf" podStartSLOduration=2.496501427 podStartE2EDuration="4.089623346s" podCreationTimestamp="2026-01-28 15:39:38 +0000 UTC" firstStartedPulling="2026-01-28 15:39:39.980655447 +0000 UTC m=+2191.432813688" lastFinishedPulling="2026-01-28 15:39:41.573777316 +0000 UTC m=+2193.025935607" observedRunningTime="2026-01-28 15:39:42.081624335 +0000 UTC m=+2193.533782586" watchObservedRunningTime="2026-01-28 15:39:42.089623346 +0000 UTC m=+2193.541781597" Jan 28 15:39:48 crc kubenswrapper[4981]: I0128 15:39:48.863387 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-hvdrf" Jan 28 15:39:48 crc kubenswrapper[4981]: I0128 15:39:48.864154 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hvdrf" Jan 28 15:39:48 crc kubenswrapper[4981]: I0128 15:39:48.945811 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hvdrf" Jan 28 15:39:49 crc kubenswrapper[4981]: I0128 15:39:49.191225 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hvdrf" Jan 28 15:39:49 crc kubenswrapper[4981]: I0128 15:39:49.252054 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvdrf"] Jan 28 15:39:49 crc kubenswrapper[4981]: I0128 15:39:49.897840 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:39:49 crc kubenswrapper[4981]: I0128 15:39:49.898217 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:39:49 crc kubenswrapper[4981]: I0128 15:39:49.898273 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:39:49 crc kubenswrapper[4981]: I0128 15:39:49.898944 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5f8d79504f79824674cbd4398c03b26e266195df3ae9a5c78dabe1b22add3fe"} pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:39:49 crc kubenswrapper[4981]: I0128 15:39:49.899033 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" containerID="cri-o://f5f8d79504f79824674cbd4398c03b26e266195df3ae9a5c78dabe1b22add3fe" gracePeriod=600 Jan 28 15:39:50 crc kubenswrapper[4981]: I0128 15:39:50.152036 4981 generic.go:334] "Generic (PLEG): container finished" podID="67525d77-715e-4ec3-bdbb-6854657355c0" containerID="f5f8d79504f79824674cbd4398c03b26e266195df3ae9a5c78dabe1b22add3fe" exitCode=0 Jan 28 15:39:50 crc kubenswrapper[4981]: I0128 15:39:50.152144 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerDied","Data":"f5f8d79504f79824674cbd4398c03b26e266195df3ae9a5c78dabe1b22add3fe"} Jan 28 15:39:50 crc kubenswrapper[4981]: I0128 15:39:50.152217 4981 scope.go:117] "RemoveContainer" containerID="b4f431dc1ee1064a4d972a4ee2377048ab07f16bf159d152a8dc969e0ed811f5" Jan 28 15:39:51 crc kubenswrapper[4981]: I0128 15:39:51.173248 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" 
event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerStarted","Data":"7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f"} Jan 28 15:39:51 crc kubenswrapper[4981]: I0128 15:39:51.173546 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hvdrf" podUID="842d4a1b-3ad0-41d2-b342-1c83e5ed5594" containerName="registry-server" containerID="cri-o://31a8dd2bf3baee00bf684aaf6f69772fdb5c354b5945b83871c3397589d5df4c" gracePeriod=2 Jan 28 15:39:51 crc kubenswrapper[4981]: I0128 15:39:51.887716 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvdrf" Jan 28 15:39:51 crc kubenswrapper[4981]: I0128 15:39:51.975306 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/842d4a1b-3ad0-41d2-b342-1c83e5ed5594-catalog-content\") pod \"842d4a1b-3ad0-41d2-b342-1c83e5ed5594\" (UID: \"842d4a1b-3ad0-41d2-b342-1c83e5ed5594\") " Jan 28 15:39:51 crc kubenswrapper[4981]: I0128 15:39:51.975461 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q2g2\" (UniqueName: \"kubernetes.io/projected/842d4a1b-3ad0-41d2-b342-1c83e5ed5594-kube-api-access-5q2g2\") pod \"842d4a1b-3ad0-41d2-b342-1c83e5ed5594\" (UID: \"842d4a1b-3ad0-41d2-b342-1c83e5ed5594\") " Jan 28 15:39:51 crc kubenswrapper[4981]: I0128 15:39:51.975564 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/842d4a1b-3ad0-41d2-b342-1c83e5ed5594-utilities\") pod \"842d4a1b-3ad0-41d2-b342-1c83e5ed5594\" (UID: \"842d4a1b-3ad0-41d2-b342-1c83e5ed5594\") " Jan 28 15:39:51 crc kubenswrapper[4981]: I0128 15:39:51.976905 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/842d4a1b-3ad0-41d2-b342-1c83e5ed5594-utilities" (OuterVolumeSpecName: "utilities") pod "842d4a1b-3ad0-41d2-b342-1c83e5ed5594" (UID: "842d4a1b-3ad0-41d2-b342-1c83e5ed5594"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:39:51 crc kubenswrapper[4981]: I0128 15:39:51.983878 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/842d4a1b-3ad0-41d2-b342-1c83e5ed5594-kube-api-access-5q2g2" (OuterVolumeSpecName: "kube-api-access-5q2g2") pod "842d4a1b-3ad0-41d2-b342-1c83e5ed5594" (UID: "842d4a1b-3ad0-41d2-b342-1c83e5ed5594"). InnerVolumeSpecName "kube-api-access-5q2g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:39:52 crc kubenswrapper[4981]: I0128 15:39:52.014623 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/842d4a1b-3ad0-41d2-b342-1c83e5ed5594-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "842d4a1b-3ad0-41d2-b342-1c83e5ed5594" (UID: "842d4a1b-3ad0-41d2-b342-1c83e5ed5594"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:39:52 crc kubenswrapper[4981]: I0128 15:39:52.077291 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q2g2\" (UniqueName: \"kubernetes.io/projected/842d4a1b-3ad0-41d2-b342-1c83e5ed5594-kube-api-access-5q2g2\") on node \"crc\" DevicePath \"\"" Jan 28 15:39:52 crc kubenswrapper[4981]: I0128 15:39:52.077916 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/842d4a1b-3ad0-41d2-b342-1c83e5ed5594-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:39:52 crc kubenswrapper[4981]: I0128 15:39:52.077984 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/842d4a1b-3ad0-41d2-b342-1c83e5ed5594-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:39:52 crc kubenswrapper[4981]: I0128 15:39:52.186927 4981 generic.go:334] "Generic (PLEG): container finished" podID="842d4a1b-3ad0-41d2-b342-1c83e5ed5594" containerID="31a8dd2bf3baee00bf684aaf6f69772fdb5c354b5945b83871c3397589d5df4c" exitCode=0 Jan 28 15:39:52 crc kubenswrapper[4981]: I0128 15:39:52.186994 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hvdrf" Jan 28 15:39:52 crc kubenswrapper[4981]: I0128 15:39:52.187022 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvdrf" event={"ID":"842d4a1b-3ad0-41d2-b342-1c83e5ed5594","Type":"ContainerDied","Data":"31a8dd2bf3baee00bf684aaf6f69772fdb5c354b5945b83871c3397589d5df4c"} Jan 28 15:39:52 crc kubenswrapper[4981]: I0128 15:39:52.188017 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hvdrf" event={"ID":"842d4a1b-3ad0-41d2-b342-1c83e5ed5594","Type":"ContainerDied","Data":"453e6fd75e35728fbbd49b1cfd6b1a665ceb8ed85e77af98d1fec145e9db91e1"} Jan 28 15:39:52 crc kubenswrapper[4981]: I0128 15:39:52.188046 4981 scope.go:117] "RemoveContainer" containerID="31a8dd2bf3baee00bf684aaf6f69772fdb5c354b5945b83871c3397589d5df4c" Jan 28 15:39:52 crc kubenswrapper[4981]: I0128 15:39:52.225833 4981 scope.go:117] "RemoveContainer" containerID="d0ca52d7aa916f28d381bea65c6a35d44a3fc586ce6891b8660201effc04f5e2" Jan 28 15:39:52 crc kubenswrapper[4981]: I0128 15:39:52.238265 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvdrf"] Jan 28 15:39:52 crc kubenswrapper[4981]: I0128 15:39:52.247051 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hvdrf"] Jan 28 15:39:52 crc kubenswrapper[4981]: I0128 15:39:52.259337 4981 scope.go:117] "RemoveContainer" containerID="0de7ed313b188eb3cd74fcea7f070627385133e2c825ed29651b24891a850773" Jan 28 15:39:52 crc kubenswrapper[4981]: I0128 15:39:52.290668 4981 scope.go:117] "RemoveContainer" containerID="31a8dd2bf3baee00bf684aaf6f69772fdb5c354b5945b83871c3397589d5df4c" Jan 28 15:39:52 crc kubenswrapper[4981]: E0128 15:39:52.291119 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31a8dd2bf3baee00bf684aaf6f69772fdb5c354b5945b83871c3397589d5df4c\": container with ID starting with 31a8dd2bf3baee00bf684aaf6f69772fdb5c354b5945b83871c3397589d5df4c not found: ID does not exist" containerID="31a8dd2bf3baee00bf684aaf6f69772fdb5c354b5945b83871c3397589d5df4c" Jan 28 15:39:52 crc kubenswrapper[4981]: I0128 15:39:52.291144 4981 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31a8dd2bf3baee00bf684aaf6f69772fdb5c354b5945b83871c3397589d5df4c"} err="failed to get container status \"31a8dd2bf3baee00bf684aaf6f69772fdb5c354b5945b83871c3397589d5df4c\": rpc error: code = NotFound desc = could not find container \"31a8dd2bf3baee00bf684aaf6f69772fdb5c354b5945b83871c3397589d5df4c\": container with ID starting with 31a8dd2bf3baee00bf684aaf6f69772fdb5c354b5945b83871c3397589d5df4c not found: ID does not exist" Jan 28 15:39:52 crc kubenswrapper[4981]: I0128 15:39:52.291163 4981 scope.go:117] "RemoveContainer" containerID="d0ca52d7aa916f28d381bea65c6a35d44a3fc586ce6891b8660201effc04f5e2" Jan 28 15:39:52 crc kubenswrapper[4981]: E0128 15:39:52.291683 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0ca52d7aa916f28d381bea65c6a35d44a3fc586ce6891b8660201effc04f5e2\": container with ID starting with d0ca52d7aa916f28d381bea65c6a35d44a3fc586ce6891b8660201effc04f5e2 not found: ID does not exist" containerID="d0ca52d7aa916f28d381bea65c6a35d44a3fc586ce6891b8660201effc04f5e2" Jan 28 15:39:52 crc kubenswrapper[4981]: I0128 15:39:52.291709 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0ca52d7aa916f28d381bea65c6a35d44a3fc586ce6891b8660201effc04f5e2"} err="failed to get container status \"d0ca52d7aa916f28d381bea65c6a35d44a3fc586ce6891b8660201effc04f5e2\": rpc error: code = NotFound desc = could not find container \"d0ca52d7aa916f28d381bea65c6a35d44a3fc586ce6891b8660201effc04f5e2\": container with ID starting with d0ca52d7aa916f28d381bea65c6a35d44a3fc586ce6891b8660201effc04f5e2 not found: ID does not exist" Jan 28 15:39:52 crc kubenswrapper[4981]: I0128 15:39:52.291726 4981 scope.go:117] "RemoveContainer" containerID="0de7ed313b188eb3cd74fcea7f070627385133e2c825ed29651b24891a850773" Jan 28 15:39:52 crc kubenswrapper[4981]: E0128 15:39:52.292048 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0de7ed313b188eb3cd74fcea7f070627385133e2c825ed29651b24891a850773\": container with ID starting with 0de7ed313b188eb3cd74fcea7f070627385133e2c825ed29651b24891a850773 not found: ID does not exist" containerID="0de7ed313b188eb3cd74fcea7f070627385133e2c825ed29651b24891a850773" Jan 28 15:39:52 crc kubenswrapper[4981]: I0128 15:39:52.292077 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0de7ed313b188eb3cd74fcea7f070627385133e2c825ed29651b24891a850773"} err="failed to get container status \"0de7ed313b188eb3cd74fcea7f070627385133e2c825ed29651b24891a850773\": rpc error: code = NotFound desc = could not find container \"0de7ed313b188eb3cd74fcea7f070627385133e2c825ed29651b24891a850773\": container with ID starting with 0de7ed313b188eb3cd74fcea7f070627385133e2c825ed29651b24891a850773 not found: ID does not exist" Jan 28 15:39:53 crc kubenswrapper[4981]: I0128 15:39:53.327921 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="842d4a1b-3ad0-41d2-b342-1c83e5ed5594" path="/var/lib/kubelet/pods/842d4a1b-3ad0-41d2-b342-1c83e5ed5594/volumes" Jan 28 15:40:33 crc kubenswrapper[4981]: I0128 15:40:33.790576 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-d79b67667-4jvlp" podUID="f3854c5d-2ac4-48d0-96df-a96b2fa5feb7" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 
502" Jan 28 15:42:19 crc kubenswrapper[4981]: I0128 15:42:19.897933 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:42:19 crc kubenswrapper[4981]: I0128 15:42:19.898397 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:42:26 crc kubenswrapper[4981]: I0128 15:42:26.066875 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bk58r"] Jan 28 15:42:26 crc kubenswrapper[4981]: E0128 15:42:26.067828 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="842d4a1b-3ad0-41d2-b342-1c83e5ed5594" containerName="extract-utilities" Jan 28 15:42:26 crc kubenswrapper[4981]: I0128 15:42:26.067842 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="842d4a1b-3ad0-41d2-b342-1c83e5ed5594" containerName="extract-utilities" Jan 28 15:42:26 crc kubenswrapper[4981]: E0128 15:42:26.067853 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="842d4a1b-3ad0-41d2-b342-1c83e5ed5594" containerName="registry-server" Jan 28 15:42:26 crc kubenswrapper[4981]: I0128 15:42:26.067859 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="842d4a1b-3ad0-41d2-b342-1c83e5ed5594" containerName="registry-server" Jan 28 15:42:26 crc kubenswrapper[4981]: E0128 15:42:26.067876 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="842d4a1b-3ad0-41d2-b342-1c83e5ed5594" containerName="extract-content" Jan 28 15:42:26 crc kubenswrapper[4981]: I0128 15:42:26.067883 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="842d4a1b-3ad0-41d2-b342-1c83e5ed5594" containerName="extract-content" Jan 28 15:42:26 crc kubenswrapper[4981]: I0128 15:42:26.068070 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="842d4a1b-3ad0-41d2-b342-1c83e5ed5594" containerName="registry-server" Jan 28 15:42:26 crc kubenswrapper[4981]: I0128 15:42:26.069467 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bk58r" Jan 28 15:42:26 crc kubenswrapper[4981]: I0128 15:42:26.081771 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bk58r"] Jan 28 15:42:26 crc kubenswrapper[4981]: I0128 15:42:26.166700 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftr8h\" (UniqueName: \"kubernetes.io/projected/783fdc94-5d9e-4876-b677-c5201ce4e4db-kube-api-access-ftr8h\") pod \"community-operators-bk58r\" (UID: \"783fdc94-5d9e-4876-b677-c5201ce4e4db\") " pod="openshift-marketplace/community-operators-bk58r" Jan 28 15:42:26 crc kubenswrapper[4981]: I0128 15:42:26.166833 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/783fdc94-5d9e-4876-b677-c5201ce4e4db-catalog-content\") pod \"community-operators-bk58r\" (UID: \"783fdc94-5d9e-4876-b677-c5201ce4e4db\") " pod="openshift-marketplace/community-operators-bk58r" Jan 28 15:42:26 crc kubenswrapper[4981]: I0128 15:42:26.166883 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/783fdc94-5d9e-4876-b677-c5201ce4e4db-utilities\") pod \"community-operators-bk58r\" (UID: \"783fdc94-5d9e-4876-b677-c5201ce4e4db\") " pod="openshift-marketplace/community-operators-bk58r" Jan 28 15:42:26 crc kubenswrapper[4981]: I0128 15:42:26.268748 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftr8h\" (UniqueName: \"kubernetes.io/projected/783fdc94-5d9e-4876-b677-c5201ce4e4db-kube-api-access-ftr8h\") pod \"community-operators-bk58r\" (UID: \"783fdc94-5d9e-4876-b677-c5201ce4e4db\") " pod="openshift-marketplace/community-operators-bk58r" Jan 28 15:42:26 crc kubenswrapper[4981]: I0128 15:42:26.268849 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/783fdc94-5d9e-4876-b677-c5201ce4e4db-catalog-content\") pod \"community-operators-bk58r\" (UID: \"783fdc94-5d9e-4876-b677-c5201ce4e4db\") " pod="openshift-marketplace/community-operators-bk58r" Jan 28 15:42:26 crc kubenswrapper[4981]: I0128 15:42:26.268884 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/783fdc94-5d9e-4876-b677-c5201ce4e4db-utilities\") pod \"community-operators-bk58r\" (UID: \"783fdc94-5d9e-4876-b677-c5201ce4e4db\") " pod="openshift-marketplace/community-operators-bk58r" Jan 28 15:42:26 crc kubenswrapper[4981]: I0128 15:42:26.269504 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/783fdc94-5d9e-4876-b677-c5201ce4e4db-utilities\") pod \"community-operators-bk58r\" (UID: \"783fdc94-5d9e-4876-b677-c5201ce4e4db\") " pod="openshift-marketplace/community-operators-bk58r" Jan 28 15:42:26 crc kubenswrapper[4981]: I0128 15:42:26.269562 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/783fdc94-5d9e-4876-b677-c5201ce4e4db-catalog-content\") pod \"community-operators-bk58r\" (UID: \"783fdc94-5d9e-4876-b677-c5201ce4e4db\") " pod="openshift-marketplace/community-operators-bk58r" Jan 28 15:42:26 crc kubenswrapper[4981]: I0128 15:42:26.306741 4981 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ftr8h\" (UniqueName: \"kubernetes.io/projected/783fdc94-5d9e-4876-b677-c5201ce4e4db-kube-api-access-ftr8h\") pod \"community-operators-bk58r\" (UID: \"783fdc94-5d9e-4876-b677-c5201ce4e4db\") " pod="openshift-marketplace/community-operators-bk58r" Jan 28 15:42:26 crc kubenswrapper[4981]: I0128 15:42:26.398535 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bk58r" Jan 28 15:42:26 crc kubenswrapper[4981]: I0128 15:42:26.950811 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bk58r"] Jan 28 15:42:27 crc kubenswrapper[4981]: E0128 15:42:27.398134 4981 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod783fdc94_5d9e_4876_b677_c5201ce4e4db.slice/crio-462d5b1d499cd8e5e03e791d5eaa539376334f19448eac9efc4ebdfaf0aa351d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod783fdc94_5d9e_4876_b677_c5201ce4e4db.slice/crio-conmon-462d5b1d499cd8e5e03e791d5eaa539376334f19448eac9efc4ebdfaf0aa351d.scope\": RecentStats: unable to find data in memory cache]" Jan 28 15:42:27 crc kubenswrapper[4981]: I0128 15:42:27.705570 4981 generic.go:334] "Generic (PLEG): container finished" podID="783fdc94-5d9e-4876-b677-c5201ce4e4db" containerID="462d5b1d499cd8e5e03e791d5eaa539376334f19448eac9efc4ebdfaf0aa351d" exitCode=0 Jan 28 15:42:27 crc kubenswrapper[4981]: I0128 15:42:27.705643 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk58r" event={"ID":"783fdc94-5d9e-4876-b677-c5201ce4e4db","Type":"ContainerDied","Data":"462d5b1d499cd8e5e03e791d5eaa539376334f19448eac9efc4ebdfaf0aa351d"} Jan 28 15:42:27 crc kubenswrapper[4981]: I0128 15:42:27.705889 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk58r" event={"ID":"783fdc94-5d9e-4876-b677-c5201ce4e4db","Type":"ContainerStarted","Data":"818db3a2dc3bc1ceccdb859226c6fcfd971ab925ade28abaeac4b2c952c2d12f"} Jan 28 15:42:27 crc kubenswrapper[4981]: I0128 15:42:27.708808 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:42:30 crc kubenswrapper[4981]: I0128 15:42:30.731724 4981 generic.go:334] "Generic (PLEG): container finished" podID="783fdc94-5d9e-4876-b677-c5201ce4e4db" containerID="d6f4435d714ba2a1fcd83cd880132032392efa261bf3e09c809acea6a1b6b69f" exitCode=0 Jan 28 15:42:30 crc kubenswrapper[4981]: I0128 15:42:30.731804 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk58r" event={"ID":"783fdc94-5d9e-4876-b677-c5201ce4e4db","Type":"ContainerDied","Data":"d6f4435d714ba2a1fcd83cd880132032392efa261bf3e09c809acea6a1b6b69f"} Jan 28 15:42:32 crc kubenswrapper[4981]: I0128 15:42:32.752469 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk58r" event={"ID":"783fdc94-5d9e-4876-b677-c5201ce4e4db","Type":"ContainerStarted","Data":"72388e9f9d7741045f559b7eafc132fda1d4ab1c00c50f5f122042cf0f13accf"} Jan 28 15:42:32 crc kubenswrapper[4981]: I0128 15:42:32.773489 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bk58r" podStartSLOduration=2.9211439280000002 
podStartE2EDuration="6.773462366s" podCreationTimestamp="2026-01-28 15:42:26 +0000 UTC" firstStartedPulling="2026-01-28 15:42:27.70844171 +0000 UTC m=+2359.160599991" lastFinishedPulling="2026-01-28 15:42:31.560760188 +0000 UTC m=+2363.012918429" observedRunningTime="2026-01-28 15:42:32.770591131 +0000 UTC m=+2364.222749372" watchObservedRunningTime="2026-01-28 15:42:32.773462366 +0000 UTC m=+2364.225620607" Jan 28 15:42:36 crc kubenswrapper[4981]: I0128 15:42:36.399006 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bk58r" Jan 28 15:42:36 crc kubenswrapper[4981]: I0128 15:42:36.399657 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bk58r" Jan 28 15:42:36 crc kubenswrapper[4981]: I0128 15:42:36.448445 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bk58r" Jan 28 15:42:36 crc kubenswrapper[4981]: I0128 15:42:36.836980 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bk58r" Jan 28 15:42:36 crc kubenswrapper[4981]: I0128 15:42:36.895710 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bk58r"] Jan 28 15:42:38 crc kubenswrapper[4981]: I0128 15:42:38.806178 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bk58r" podUID="783fdc94-5d9e-4876-b677-c5201ce4e4db" containerName="registry-server" containerID="cri-o://72388e9f9d7741045f559b7eafc132fda1d4ab1c00c50f5f122042cf0f13accf" gracePeriod=2 Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.297126 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bk58r" Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.417157 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/783fdc94-5d9e-4876-b677-c5201ce4e4db-catalog-content\") pod \"783fdc94-5d9e-4876-b677-c5201ce4e4db\" (UID: \"783fdc94-5d9e-4876-b677-c5201ce4e4db\") " Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.417242 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftr8h\" (UniqueName: \"kubernetes.io/projected/783fdc94-5d9e-4876-b677-c5201ce4e4db-kube-api-access-ftr8h\") pod \"783fdc94-5d9e-4876-b677-c5201ce4e4db\" (UID: \"783fdc94-5d9e-4876-b677-c5201ce4e4db\") " Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.417443 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/783fdc94-5d9e-4876-b677-c5201ce4e4db-utilities\") pod \"783fdc94-5d9e-4876-b677-c5201ce4e4db\" (UID: \"783fdc94-5d9e-4876-b677-c5201ce4e4db\") " Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.419775 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/783fdc94-5d9e-4876-b677-c5201ce4e4db-utilities" (OuterVolumeSpecName: "utilities") pod "783fdc94-5d9e-4876-b677-c5201ce4e4db" (UID: "783fdc94-5d9e-4876-b677-c5201ce4e4db"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.422453 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/783fdc94-5d9e-4876-b677-c5201ce4e4db-kube-api-access-ftr8h" (OuterVolumeSpecName: "kube-api-access-ftr8h") pod "783fdc94-5d9e-4876-b677-c5201ce4e4db" (UID: "783fdc94-5d9e-4876-b677-c5201ce4e4db"). InnerVolumeSpecName "kube-api-access-ftr8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.479746 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/783fdc94-5d9e-4876-b677-c5201ce4e4db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "783fdc94-5d9e-4876-b677-c5201ce4e4db" (UID: "783fdc94-5d9e-4876-b677-c5201ce4e4db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.520359 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/783fdc94-5d9e-4876-b677-c5201ce4e4db-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.520400 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/783fdc94-5d9e-4876-b677-c5201ce4e4db-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.520415 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftr8h\" (UniqueName: \"kubernetes.io/projected/783fdc94-5d9e-4876-b677-c5201ce4e4db-kube-api-access-ftr8h\") on node \"crc\" DevicePath \"\"" Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.823433 4981 generic.go:334] "Generic (PLEG): container finished" podID="783fdc94-5d9e-4876-b677-c5201ce4e4db" containerID="72388e9f9d7741045f559b7eafc132fda1d4ab1c00c50f5f122042cf0f13accf" exitCode=0 Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.823500 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk58r" event={"ID":"783fdc94-5d9e-4876-b677-c5201ce4e4db","Type":"ContainerDied","Data":"72388e9f9d7741045f559b7eafc132fda1d4ab1c00c50f5f122042cf0f13accf"} Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.823538 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bk58r" Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.823558 4981 scope.go:117] "RemoveContainer" containerID="72388e9f9d7741045f559b7eafc132fda1d4ab1c00c50f5f122042cf0f13accf" Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.823542 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bk58r" event={"ID":"783fdc94-5d9e-4876-b677-c5201ce4e4db","Type":"ContainerDied","Data":"818db3a2dc3bc1ceccdb859226c6fcfd971ab925ade28abaeac4b2c952c2d12f"} Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.872262 4981 scope.go:117] "RemoveContainer" containerID="d6f4435d714ba2a1fcd83cd880132032392efa261bf3e09c809acea6a1b6b69f" Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.879686 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bk58r"] Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.890388 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bk58r"] Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.910970 4981 scope.go:117] "RemoveContainer" containerID="462d5b1d499cd8e5e03e791d5eaa539376334f19448eac9efc4ebdfaf0aa351d" Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.959986 4981 scope.go:117] "RemoveContainer" containerID="72388e9f9d7741045f559b7eafc132fda1d4ab1c00c50f5f122042cf0f13accf" Jan 28 15:42:39 crc kubenswrapper[4981]: E0128 15:42:39.960619 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72388e9f9d7741045f559b7eafc132fda1d4ab1c00c50f5f122042cf0f13accf\": container with ID starting with 72388e9f9d7741045f559b7eafc132fda1d4ab1c00c50f5f122042cf0f13accf not found: ID does not exist" containerID="72388e9f9d7741045f559b7eafc132fda1d4ab1c00c50f5f122042cf0f13accf" Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.960668 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72388e9f9d7741045f559b7eafc132fda1d4ab1c00c50f5f122042cf0f13accf"} err="failed to get container status \"72388e9f9d7741045f559b7eafc132fda1d4ab1c00c50f5f122042cf0f13accf\": rpc error: code = NotFound desc = could not find container \"72388e9f9d7741045f559b7eafc132fda1d4ab1c00c50f5f122042cf0f13accf\": container with ID starting with 72388e9f9d7741045f559b7eafc132fda1d4ab1c00c50f5f122042cf0f13accf not found: ID does not exist" Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.960703 4981 scope.go:117] "RemoveContainer" containerID="d6f4435d714ba2a1fcd83cd880132032392efa261bf3e09c809acea6a1b6b69f" Jan 28 15:42:39 crc kubenswrapper[4981]: E0128 15:42:39.961358 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6f4435d714ba2a1fcd83cd880132032392efa261bf3e09c809acea6a1b6b69f\": container with ID starting with d6f4435d714ba2a1fcd83cd880132032392efa261bf3e09c809acea6a1b6b69f not found: ID does not exist" containerID="d6f4435d714ba2a1fcd83cd880132032392efa261bf3e09c809acea6a1b6b69f" Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.961424 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6f4435d714ba2a1fcd83cd880132032392efa261bf3e09c809acea6a1b6b69f"} err="failed to get container status \"d6f4435d714ba2a1fcd83cd880132032392efa261bf3e09c809acea6a1b6b69f\": rpc error: code = NotFound desc = could not find 
container \"d6f4435d714ba2a1fcd83cd880132032392efa261bf3e09c809acea6a1b6b69f\": container with ID starting with d6f4435d714ba2a1fcd83cd880132032392efa261bf3e09c809acea6a1b6b69f not found: ID does not exist" Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.961447 4981 scope.go:117] "RemoveContainer" containerID="462d5b1d499cd8e5e03e791d5eaa539376334f19448eac9efc4ebdfaf0aa351d" Jan 28 15:42:39 crc kubenswrapper[4981]: E0128 15:42:39.961724 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"462d5b1d499cd8e5e03e791d5eaa539376334f19448eac9efc4ebdfaf0aa351d\": container with ID starting with 462d5b1d499cd8e5e03e791d5eaa539376334f19448eac9efc4ebdfaf0aa351d not found: ID does not exist" containerID="462d5b1d499cd8e5e03e791d5eaa539376334f19448eac9efc4ebdfaf0aa351d" Jan 28 15:42:39 crc kubenswrapper[4981]: I0128 15:42:39.961750 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"462d5b1d499cd8e5e03e791d5eaa539376334f19448eac9efc4ebdfaf0aa351d"} err="failed to get container status \"462d5b1d499cd8e5e03e791d5eaa539376334f19448eac9efc4ebdfaf0aa351d\": rpc error: code = NotFound desc = could not find container \"462d5b1d499cd8e5e03e791d5eaa539376334f19448eac9efc4ebdfaf0aa351d\": container with ID starting with 462d5b1d499cd8e5e03e791d5eaa539376334f19448eac9efc4ebdfaf0aa351d not found: ID does not exist" Jan 28 15:42:41 crc kubenswrapper[4981]: I0128 15:42:41.336230 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="783fdc94-5d9e-4876-b677-c5201ce4e4db" path="/var/lib/kubelet/pods/783fdc94-5d9e-4876-b677-c5201ce4e4db/volumes" Jan 28 15:42:49 crc kubenswrapper[4981]: I0128 15:42:49.897937 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:42:49 crc kubenswrapper[4981]: I0128 15:42:49.898547 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:43:19 crc kubenswrapper[4981]: I0128 15:43:19.898132 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:43:19 crc kubenswrapper[4981]: I0128 15:43:19.898875 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:43:19 crc kubenswrapper[4981]: I0128 15:43:19.898969 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:43:19 crc kubenswrapper[4981]: I0128 15:43:19.900256 4981 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f"} pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:43:19 crc kubenswrapper[4981]: I0128 15:43:19.900370 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" containerID="cri-o://7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" gracePeriod=600 Jan 28 15:43:20 crc kubenswrapper[4981]: E0128 15:43:20.129762 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:43:20 crc kubenswrapper[4981]: I0128 15:43:20.252406 4981 generic.go:334] "Generic (PLEG): container finished" podID="67525d77-715e-4ec3-bdbb-6854657355c0" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" exitCode=0 Jan 28 15:43:20 crc kubenswrapper[4981]: I0128 15:43:20.252447 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerDied","Data":"7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f"} Jan 28 15:43:20 crc kubenswrapper[4981]: I0128 15:43:20.252479 4981 scope.go:117] "RemoveContainer" containerID="f5f8d79504f79824674cbd4398c03b26e266195df3ae9a5c78dabe1b22add3fe" Jan 28 15:43:20 crc kubenswrapper[4981]: I0128 15:43:20.253398 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:43:20 crc kubenswrapper[4981]: E0128 15:43:20.253880 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:43:25 crc kubenswrapper[4981]: I0128 15:43:25.300155 4981 generic.go:334] "Generic (PLEG): container finished" podID="e78a3044-c335-4c2f-9fa6-314f2d40ef11" containerID="8d27158a72bce127725f26859bec2055a975d55018834dc6f96052979a401730" exitCode=0 Jan 28 15:43:25 crc kubenswrapper[4981]: I0128 15:43:25.300273 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" event={"ID":"e78a3044-c335-4c2f-9fa6-314f2d40ef11","Type":"ContainerDied","Data":"8d27158a72bce127725f26859bec2055a975d55018834dc6f96052979a401730"} Jan 28 15:43:26 crc kubenswrapper[4981]: I0128 15:43:26.860274 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" Jan 28 15:43:26 crc kubenswrapper[4981]: I0128 15:43:26.983715 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-libvirt-secret-0\") pod \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " Jan 28 15:43:26 crc kubenswrapper[4981]: I0128 15:43:26.983792 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtd6x\" (UniqueName: \"kubernetes.io/projected/e78a3044-c335-4c2f-9fa6-314f2d40ef11-kube-api-access-rtd6x\") pod \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " Jan 28 15:43:26 crc kubenswrapper[4981]: I0128 15:43:26.983967 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-libvirt-combined-ca-bundle\") pod \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " Jan 28 15:43:26 crc kubenswrapper[4981]: I0128 15:43:26.984018 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-ssh-key-openstack-edpm-ipam\") pod \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " Jan 28 15:43:26 crc kubenswrapper[4981]: I0128 15:43:26.984061 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-inventory\") pod \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\" (UID: \"e78a3044-c335-4c2f-9fa6-314f2d40ef11\") " Jan 28 15:43:26 crc kubenswrapper[4981]: I0128 15:43:26.994862 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e78a3044-c335-4c2f-9fa6-314f2d40ef11-kube-api-access-rtd6x" (OuterVolumeSpecName: "kube-api-access-rtd6x") pod "e78a3044-c335-4c2f-9fa6-314f2d40ef11" (UID: "e78a3044-c335-4c2f-9fa6-314f2d40ef11"). InnerVolumeSpecName "kube-api-access-rtd6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:43:26 crc kubenswrapper[4981]: I0128 15:43:26.998683 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e78a3044-c335-4c2f-9fa6-314f2d40ef11" (UID: "e78a3044-c335-4c2f-9fa6-314f2d40ef11"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.013966 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "e78a3044-c335-4c2f-9fa6-314f2d40ef11" (UID: "e78a3044-c335-4c2f-9fa6-314f2d40ef11"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.016090 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e78a3044-c335-4c2f-9fa6-314f2d40ef11" (UID: "e78a3044-c335-4c2f-9fa6-314f2d40ef11"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.028839 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-inventory" (OuterVolumeSpecName: "inventory") pod "e78a3044-c335-4c2f-9fa6-314f2d40ef11" (UID: "e78a3044-c335-4c2f-9fa6-314f2d40ef11"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.086736 4981 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.086766 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtd6x\" (UniqueName: \"kubernetes.io/projected/e78a3044-c335-4c2f-9fa6-314f2d40ef11-kube-api-access-rtd6x\") on node \"crc\" DevicePath \"\"" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.086777 4981 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.086785 4981 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.086794 4981 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e78a3044-c335-4c2f-9fa6-314f2d40ef11-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.318305 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.329219 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p" event={"ID":"e78a3044-c335-4c2f-9fa6-314f2d40ef11","Type":"ContainerDied","Data":"39c82dce179ae6fbdde256df248c02954612bfa133d4ea31752bc7954d08d36b"} Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.329267 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39c82dce179ae6fbdde256df248c02954612bfa133d4ea31752bc7954d08d36b" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.423350 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp"] Jan 28 15:43:27 crc kubenswrapper[4981]: E0128 15:43:27.423765 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="783fdc94-5d9e-4876-b677-c5201ce4e4db" containerName="registry-server" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.423789 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="783fdc94-5d9e-4876-b677-c5201ce4e4db" containerName="registry-server" Jan 28 15:43:27 crc kubenswrapper[4981]: E0128 15:43:27.423826 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="783fdc94-5d9e-4876-b677-c5201ce4e4db" containerName="extract-utilities" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.423834 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="783fdc94-5d9e-4876-b677-c5201ce4e4db" containerName="extract-utilities" Jan 28 15:43:27 crc kubenswrapper[4981]: E0128 15:43:27.423851 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="783fdc94-5d9e-4876-b677-c5201ce4e4db" containerName="extract-content" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.423860 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="783fdc94-5d9e-4876-b677-c5201ce4e4db" containerName="extract-content" Jan 28 15:43:27 crc kubenswrapper[4981]: E0128 15:43:27.423869 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e78a3044-c335-4c2f-9fa6-314f2d40ef11" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.423880 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="e78a3044-c335-4c2f-9fa6-314f2d40ef11" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.424124 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="e78a3044-c335-4c2f-9fa6-314f2d40ef11" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.424162 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="783fdc94-5d9e-4876-b677-c5201ce4e4db" containerName="registry-server" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.424847 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.430174 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.430296 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.430201 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.430201 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.430503 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pz626" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.430655 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.431055 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.435709 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp"] Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.596207 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.596266 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.596859 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhzwf\" (UniqueName: \"kubernetes.io/projected/d6e35d22-36ba-4506-a8bf-f0a7f539502a-kube-api-access-zhzwf\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.597000 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.597146 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.597245 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.597454 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.597536 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.597665 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.698643 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.698923 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.699092 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.699222 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.699352 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.699516 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.699645 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.699785 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhzwf\" (UniqueName: \"kubernetes.io/projected/d6e35d22-36ba-4506-a8bf-f0a7f539502a-kube-api-access-zhzwf\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.699885 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.700496 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.702984 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.703224 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.703509 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.703613 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.704109 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.704369 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.711296 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.716080 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhzwf\" (UniqueName: \"kubernetes.io/projected/d6e35d22-36ba-4506-a8bf-f0a7f539502a-kube-api-access-zhzwf\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gx6fp\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:27 crc kubenswrapper[4981]: I0128 15:43:27.759612 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:43:28 crc kubenswrapper[4981]: I0128 15:43:28.262012 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp"] Jan 28 15:43:28 crc kubenswrapper[4981]: I0128 15:43:28.326657 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" event={"ID":"d6e35d22-36ba-4506-a8bf-f0a7f539502a","Type":"ContainerStarted","Data":"a2d30797700b0067d6caef1e5742147fd5ffaf73699095cd3679c3950504b3bf"} Jan 28 15:43:29 crc kubenswrapper[4981]: I0128 15:43:29.355903 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" event={"ID":"d6e35d22-36ba-4506-a8bf-f0a7f539502a","Type":"ContainerStarted","Data":"7e71a2366c7e1a93fb3b666db15093a30f8b7ffdc5230dce6a37d0e26d0ba8bd"} Jan 28 15:43:29 crc kubenswrapper[4981]: I0128 15:43:29.384756 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" podStartSLOduration=1.866342062 podStartE2EDuration="2.384742354s" podCreationTimestamp="2026-01-28 15:43:27 +0000 UTC" firstStartedPulling="2026-01-28 15:43:28.267573761 +0000 UTC m=+2419.719731992" lastFinishedPulling="2026-01-28 15:43:28.785974043 +0000 UTC m=+2420.238132284" observedRunningTime="2026-01-28 15:43:29.377074274 +0000 UTC m=+2420.829232515" watchObservedRunningTime="2026-01-28 15:43:29.384742354 +0000 UTC m=+2420.836900595" Jan 28 15:43:34 crc kubenswrapper[4981]: I0128 15:43:34.318759 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:43:34 crc kubenswrapper[4981]: E0128 15:43:34.319482 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:43:45 crc kubenswrapper[4981]: I0128 15:43:45.318769 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:43:45 crc kubenswrapper[4981]: E0128 15:43:45.319629 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:43:59 crc kubenswrapper[4981]: I0128 15:43:59.325494 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:43:59 crc kubenswrapper[4981]: E0128 15:43:59.326267 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" 
podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:44:11 crc kubenswrapper[4981]: I0128 15:44:11.319986 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:44:11 crc kubenswrapper[4981]: E0128 15:44:11.321013 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:44:12 crc kubenswrapper[4981]: I0128 15:44:12.955610 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7p98w"] Jan 28 15:44:12 crc kubenswrapper[4981]: I0128 15:44:12.963280 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7p98w" Jan 28 15:44:12 crc kubenswrapper[4981]: I0128 15:44:12.978386 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7p98w"] Jan 28 15:44:13 crc kubenswrapper[4981]: I0128 15:44:13.090741 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2xrj\" (UniqueName: \"kubernetes.io/projected/83018c87-50c9-45d5-9ac1-57510e611df4-kube-api-access-d2xrj\") pod \"certified-operators-7p98w\" (UID: \"83018c87-50c9-45d5-9ac1-57510e611df4\") " pod="openshift-marketplace/certified-operators-7p98w" Jan 28 15:44:13 crc kubenswrapper[4981]: I0128 15:44:13.090887 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83018c87-50c9-45d5-9ac1-57510e611df4-utilities\") pod \"certified-operators-7p98w\" (UID: \"83018c87-50c9-45d5-9ac1-57510e611df4\") " pod="openshift-marketplace/certified-operators-7p98w" Jan 28 15:44:13 crc kubenswrapper[4981]: I0128 15:44:13.090926 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83018c87-50c9-45d5-9ac1-57510e611df4-catalog-content\") pod \"certified-operators-7p98w\" (UID: \"83018c87-50c9-45d5-9ac1-57510e611df4\") " pod="openshift-marketplace/certified-operators-7p98w" Jan 28 15:44:13 crc kubenswrapper[4981]: I0128 15:44:13.193423 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83018c87-50c9-45d5-9ac1-57510e611df4-utilities\") pod \"certified-operators-7p98w\" (UID: \"83018c87-50c9-45d5-9ac1-57510e611df4\") " pod="openshift-marketplace/certified-operators-7p98w" Jan 28 15:44:13 crc kubenswrapper[4981]: I0128 15:44:13.193515 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83018c87-50c9-45d5-9ac1-57510e611df4-catalog-content\") pod \"certified-operators-7p98w\" (UID: \"83018c87-50c9-45d5-9ac1-57510e611df4\") " pod="openshift-marketplace/certified-operators-7p98w" Jan 28 15:44:13 crc kubenswrapper[4981]: I0128 15:44:13.193654 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2xrj\" (UniqueName: \"kubernetes.io/projected/83018c87-50c9-45d5-9ac1-57510e611df4-kube-api-access-d2xrj\") 
pod \"certified-operators-7p98w\" (UID: \"83018c87-50c9-45d5-9ac1-57510e611df4\") " pod="openshift-marketplace/certified-operators-7p98w" Jan 28 15:44:13 crc kubenswrapper[4981]: I0128 15:44:13.193988 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83018c87-50c9-45d5-9ac1-57510e611df4-utilities\") pod \"certified-operators-7p98w\" (UID: \"83018c87-50c9-45d5-9ac1-57510e611df4\") " pod="openshift-marketplace/certified-operators-7p98w" Jan 28 15:44:13 crc kubenswrapper[4981]: I0128 15:44:13.194092 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83018c87-50c9-45d5-9ac1-57510e611df4-catalog-content\") pod \"certified-operators-7p98w\" (UID: \"83018c87-50c9-45d5-9ac1-57510e611df4\") " pod="openshift-marketplace/certified-operators-7p98w" Jan 28 15:44:13 crc kubenswrapper[4981]: I0128 15:44:13.218086 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2xrj\" (UniqueName: \"kubernetes.io/projected/83018c87-50c9-45d5-9ac1-57510e611df4-kube-api-access-d2xrj\") pod \"certified-operators-7p98w\" (UID: \"83018c87-50c9-45d5-9ac1-57510e611df4\") " pod="openshift-marketplace/certified-operators-7p98w" Jan 28 15:44:13 crc kubenswrapper[4981]: I0128 15:44:13.291356 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7p98w" Jan 28 15:44:13 crc kubenswrapper[4981]: I0128 15:44:13.806754 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7p98w"] Jan 28 15:44:14 crc kubenswrapper[4981]: I0128 15:44:14.779038 4981 generic.go:334] "Generic (PLEG): container finished" podID="83018c87-50c9-45d5-9ac1-57510e611df4" containerID="5fb9ecbaf6fa46f8faa58a0f6927c14662e05066a456707593380499fbd18515" exitCode=0 Jan 28 15:44:14 crc kubenswrapper[4981]: I0128 15:44:14.779151 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7p98w" event={"ID":"83018c87-50c9-45d5-9ac1-57510e611df4","Type":"ContainerDied","Data":"5fb9ecbaf6fa46f8faa58a0f6927c14662e05066a456707593380499fbd18515"} Jan 28 15:44:14 crc kubenswrapper[4981]: I0128 15:44:14.779407 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7p98w" event={"ID":"83018c87-50c9-45d5-9ac1-57510e611df4","Type":"ContainerStarted","Data":"03bc86a5e85bf0c4832ff0a14843a487d53154751e44e9b6c8071e594215e4c5"} Jan 28 15:44:15 crc kubenswrapper[4981]: I0128 15:44:15.790454 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7p98w" event={"ID":"83018c87-50c9-45d5-9ac1-57510e611df4","Type":"ContainerStarted","Data":"31bd7cb75a6a69e43397b0f93bb75eb0e1f5702a641c0a665a544ecb52e5571a"} Jan 28 15:44:16 crc kubenswrapper[4981]: I0128 15:44:16.801654 4981 generic.go:334] "Generic (PLEG): container finished" podID="83018c87-50c9-45d5-9ac1-57510e611df4" containerID="31bd7cb75a6a69e43397b0f93bb75eb0e1f5702a641c0a665a544ecb52e5571a" exitCode=0 Jan 28 15:44:16 crc kubenswrapper[4981]: I0128 15:44:16.801694 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7p98w" event={"ID":"83018c87-50c9-45d5-9ac1-57510e611df4","Type":"ContainerDied","Data":"31bd7cb75a6a69e43397b0f93bb75eb0e1f5702a641c0a665a544ecb52e5571a"} Jan 28 15:44:17 crc kubenswrapper[4981]: I0128 15:44:17.812688 4981 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7p98w" event={"ID":"83018c87-50c9-45d5-9ac1-57510e611df4","Type":"ContainerStarted","Data":"c17bd0f51ec46d6b08b46b00a963ad229a3d703adbf383f13f910d97950a6611"} Jan 28 15:44:17 crc kubenswrapper[4981]: I0128 15:44:17.835136 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7p98w" podStartSLOduration=3.416606614 podStartE2EDuration="5.835109761s" podCreationTimestamp="2026-01-28 15:44:12 +0000 UTC" firstStartedPulling="2026-01-28 15:44:14.781184723 +0000 UTC m=+2466.233343004" lastFinishedPulling="2026-01-28 15:44:17.19968791 +0000 UTC m=+2468.651846151" observedRunningTime="2026-01-28 15:44:17.833349765 +0000 UTC m=+2469.285508036" watchObservedRunningTime="2026-01-28 15:44:17.835109761 +0000 UTC m=+2469.287268012" Jan 28 15:44:23 crc kubenswrapper[4981]: I0128 15:44:23.291843 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7p98w" Jan 28 15:44:23 crc kubenswrapper[4981]: I0128 15:44:23.292181 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7p98w" Jan 28 15:44:23 crc kubenswrapper[4981]: I0128 15:44:23.319266 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:44:23 crc kubenswrapper[4981]: E0128 15:44:23.319555 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:44:23 crc kubenswrapper[4981]: I0128 15:44:23.348753 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7p98w" Jan 28 15:44:23 crc kubenswrapper[4981]: I0128 15:44:23.930282 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7p98w" Jan 28 15:44:25 crc kubenswrapper[4981]: I0128 15:44:25.926760 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7p98w"] Jan 28 15:44:26 crc kubenswrapper[4981]: I0128 15:44:26.898391 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7p98w" podUID="83018c87-50c9-45d5-9ac1-57510e611df4" containerName="registry-server" containerID="cri-o://c17bd0f51ec46d6b08b46b00a963ad229a3d703adbf383f13f910d97950a6611" gracePeriod=2 Jan 28 15:44:27 crc kubenswrapper[4981]: I0128 15:44:27.841904 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7p98w" Jan 28 15:44:27 crc kubenswrapper[4981]: I0128 15:44:27.908320 4981 generic.go:334] "Generic (PLEG): container finished" podID="83018c87-50c9-45d5-9ac1-57510e611df4" containerID="c17bd0f51ec46d6b08b46b00a963ad229a3d703adbf383f13f910d97950a6611" exitCode=0 Jan 28 15:44:27 crc kubenswrapper[4981]: I0128 15:44:27.908357 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7p98w" event={"ID":"83018c87-50c9-45d5-9ac1-57510e611df4","Type":"ContainerDied","Data":"c17bd0f51ec46d6b08b46b00a963ad229a3d703adbf383f13f910d97950a6611"} Jan 28 15:44:27 crc kubenswrapper[4981]: I0128 15:44:27.908397 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7p98w" Jan 28 15:44:27 crc kubenswrapper[4981]: I0128 15:44:27.908415 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7p98w" event={"ID":"83018c87-50c9-45d5-9ac1-57510e611df4","Type":"ContainerDied","Data":"03bc86a5e85bf0c4832ff0a14843a487d53154751e44e9b6c8071e594215e4c5"} Jan 28 15:44:27 crc kubenswrapper[4981]: I0128 15:44:27.908449 4981 scope.go:117] "RemoveContainer" containerID="c17bd0f51ec46d6b08b46b00a963ad229a3d703adbf383f13f910d97950a6611" Jan 28 15:44:27 crc kubenswrapper[4981]: I0128 15:44:27.932678 4981 scope.go:117] "RemoveContainer" containerID="31bd7cb75a6a69e43397b0f93bb75eb0e1f5702a641c0a665a544ecb52e5571a" Jan 28 15:44:27 crc kubenswrapper[4981]: I0128 15:44:27.955857 4981 scope.go:117] "RemoveContainer" containerID="5fb9ecbaf6fa46f8faa58a0f6927c14662e05066a456707593380499fbd18515" Jan 28 15:44:27 crc kubenswrapper[4981]: I0128 15:44:27.991705 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83018c87-50c9-45d5-9ac1-57510e611df4-catalog-content\") pod \"83018c87-50c9-45d5-9ac1-57510e611df4\" (UID: \"83018c87-50c9-45d5-9ac1-57510e611df4\") " Jan 28 15:44:27 crc kubenswrapper[4981]: I0128 15:44:27.991969 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2xrj\" (UniqueName: \"kubernetes.io/projected/83018c87-50c9-45d5-9ac1-57510e611df4-kube-api-access-d2xrj\") pod \"83018c87-50c9-45d5-9ac1-57510e611df4\" (UID: \"83018c87-50c9-45d5-9ac1-57510e611df4\") " Jan 28 15:44:27 crc kubenswrapper[4981]: I0128 15:44:27.992050 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83018c87-50c9-45d5-9ac1-57510e611df4-utilities\") pod \"83018c87-50c9-45d5-9ac1-57510e611df4\" (UID: \"83018c87-50c9-45d5-9ac1-57510e611df4\") " Jan 28 15:44:27 crc kubenswrapper[4981]: I0128 15:44:27.993032 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83018c87-50c9-45d5-9ac1-57510e611df4-utilities" (OuterVolumeSpecName: "utilities") pod "83018c87-50c9-45d5-9ac1-57510e611df4" (UID: "83018c87-50c9-45d5-9ac1-57510e611df4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:44:28 crc kubenswrapper[4981]: I0128 15:44:28.000783 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83018c87-50c9-45d5-9ac1-57510e611df4-kube-api-access-d2xrj" (OuterVolumeSpecName: "kube-api-access-d2xrj") pod "83018c87-50c9-45d5-9ac1-57510e611df4" (UID: "83018c87-50c9-45d5-9ac1-57510e611df4"). InnerVolumeSpecName "kube-api-access-d2xrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:44:28 crc kubenswrapper[4981]: I0128 15:44:28.019992 4981 scope.go:117] "RemoveContainer" containerID="c17bd0f51ec46d6b08b46b00a963ad229a3d703adbf383f13f910d97950a6611" Jan 28 15:44:28 crc kubenswrapper[4981]: E0128 15:44:28.020678 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c17bd0f51ec46d6b08b46b00a963ad229a3d703adbf383f13f910d97950a6611\": container with ID starting with c17bd0f51ec46d6b08b46b00a963ad229a3d703adbf383f13f910d97950a6611 not found: ID does not exist" containerID="c17bd0f51ec46d6b08b46b00a963ad229a3d703adbf383f13f910d97950a6611" Jan 28 15:44:28 crc kubenswrapper[4981]: I0128 15:44:28.020723 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c17bd0f51ec46d6b08b46b00a963ad229a3d703adbf383f13f910d97950a6611"} err="failed to get container status \"c17bd0f51ec46d6b08b46b00a963ad229a3d703adbf383f13f910d97950a6611\": rpc error: code = NotFound desc = could not find container \"c17bd0f51ec46d6b08b46b00a963ad229a3d703adbf383f13f910d97950a6611\": container with ID starting with c17bd0f51ec46d6b08b46b00a963ad229a3d703adbf383f13f910d97950a6611 not found: ID does not exist" Jan 28 15:44:28 crc kubenswrapper[4981]: I0128 15:44:28.020748 4981 scope.go:117] "RemoveContainer" containerID="31bd7cb75a6a69e43397b0f93bb75eb0e1f5702a641c0a665a544ecb52e5571a" Jan 28 15:44:28 crc kubenswrapper[4981]: E0128 15:44:28.021215 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31bd7cb75a6a69e43397b0f93bb75eb0e1f5702a641c0a665a544ecb52e5571a\": container with ID starting with 31bd7cb75a6a69e43397b0f93bb75eb0e1f5702a641c0a665a544ecb52e5571a not found: ID does not exist" containerID="31bd7cb75a6a69e43397b0f93bb75eb0e1f5702a641c0a665a544ecb52e5571a" Jan 28 15:44:28 crc kubenswrapper[4981]: I0128 15:44:28.021259 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31bd7cb75a6a69e43397b0f93bb75eb0e1f5702a641c0a665a544ecb52e5571a"} err="failed to get container status \"31bd7cb75a6a69e43397b0f93bb75eb0e1f5702a641c0a665a544ecb52e5571a\": rpc error: code = NotFound desc = could not find container \"31bd7cb75a6a69e43397b0f93bb75eb0e1f5702a641c0a665a544ecb52e5571a\": container with ID starting with 31bd7cb75a6a69e43397b0f93bb75eb0e1f5702a641c0a665a544ecb52e5571a not found: ID does not exist" Jan 28 15:44:28 crc kubenswrapper[4981]: I0128 15:44:28.021278 4981 scope.go:117] "RemoveContainer" containerID="5fb9ecbaf6fa46f8faa58a0f6927c14662e05066a456707593380499fbd18515" Jan 28 15:44:28 crc kubenswrapper[4981]: E0128 15:44:28.021706 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fb9ecbaf6fa46f8faa58a0f6927c14662e05066a456707593380499fbd18515\": container with ID starting with 5fb9ecbaf6fa46f8faa58a0f6927c14662e05066a456707593380499fbd18515 not found: ID does not 
exist" containerID="5fb9ecbaf6fa46f8faa58a0f6927c14662e05066a456707593380499fbd18515" Jan 28 15:44:28 crc kubenswrapper[4981]: I0128 15:44:28.021728 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fb9ecbaf6fa46f8faa58a0f6927c14662e05066a456707593380499fbd18515"} err="failed to get container status \"5fb9ecbaf6fa46f8faa58a0f6927c14662e05066a456707593380499fbd18515\": rpc error: code = NotFound desc = could not find container \"5fb9ecbaf6fa46f8faa58a0f6927c14662e05066a456707593380499fbd18515\": container with ID starting with 5fb9ecbaf6fa46f8faa58a0f6927c14662e05066a456707593380499fbd18515 not found: ID does not exist" Jan 28 15:44:28 crc kubenswrapper[4981]: I0128 15:44:28.047548 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83018c87-50c9-45d5-9ac1-57510e611df4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83018c87-50c9-45d5-9ac1-57510e611df4" (UID: "83018c87-50c9-45d5-9ac1-57510e611df4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:44:28 crc kubenswrapper[4981]: I0128 15:44:28.093887 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2xrj\" (UniqueName: \"kubernetes.io/projected/83018c87-50c9-45d5-9ac1-57510e611df4-kube-api-access-d2xrj\") on node \"crc\" DevicePath \"\"" Jan 28 15:44:28 crc kubenswrapper[4981]: I0128 15:44:28.093921 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83018c87-50c9-45d5-9ac1-57510e611df4-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:44:28 crc kubenswrapper[4981]: I0128 15:44:28.093932 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83018c87-50c9-45d5-9ac1-57510e611df4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:44:28 crc kubenswrapper[4981]: I0128 15:44:28.248128 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7p98w"] Jan 28 15:44:28 crc kubenswrapper[4981]: I0128 15:44:28.258659 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7p98w"] Jan 28 15:44:29 crc kubenswrapper[4981]: I0128 15:44:29.344341 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83018c87-50c9-45d5-9ac1-57510e611df4" path="/var/lib/kubelet/pods/83018c87-50c9-45d5-9ac1-57510e611df4/volumes" Jan 28 15:44:34 crc kubenswrapper[4981]: I0128 15:44:34.319255 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:44:34 crc kubenswrapper[4981]: E0128 15:44:34.319808 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:44:49 crc kubenswrapper[4981]: I0128 15:44:49.336601 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:44:49 crc kubenswrapper[4981]: E0128 15:44:49.337826 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.147784 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-8d667"] Jan 28 15:45:00 crc kubenswrapper[4981]: E0128 15:45:00.148669 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83018c87-50c9-45d5-9ac1-57510e611df4" containerName="extract-utilities" Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.148681 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="83018c87-50c9-45d5-9ac1-57510e611df4" containerName="extract-utilities" Jan 28 15:45:00 crc kubenswrapper[4981]: E0128 15:45:00.148704 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83018c87-50c9-45d5-9ac1-57510e611df4" containerName="registry-server" Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.148711 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="83018c87-50c9-45d5-9ac1-57510e611df4" containerName="registry-server" Jan 28 15:45:00 crc kubenswrapper[4981]: E0128 15:45:00.148733 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83018c87-50c9-45d5-9ac1-57510e611df4" containerName="extract-content" Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.148740 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="83018c87-50c9-45d5-9ac1-57510e611df4" containerName="extract-content" Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.148918 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="83018c87-50c9-45d5-9ac1-57510e611df4" containerName="registry-server" Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.149711 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8d667" Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.153072 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.153567 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.160440 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-8d667"] Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.201643 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9mhl\" (UniqueName: \"kubernetes.io/projected/11d0d9c2-470c-4e30-a64c-f854d5c99762-kube-api-access-w9mhl\") pod \"collect-profiles-29493585-8d667\" (UID: \"11d0d9c2-470c-4e30-a64c-f854d5c99762\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8d667" Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.201809 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11d0d9c2-470c-4e30-a64c-f854d5c99762-secret-volume\") pod \"collect-profiles-29493585-8d667\" (UID: \"11d0d9c2-470c-4e30-a64c-f854d5c99762\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8d667" Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.202098 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11d0d9c2-470c-4e30-a64c-f854d5c99762-config-volume\") pod \"collect-profiles-29493585-8d667\" (UID: \"11d0d9c2-470c-4e30-a64c-f854d5c99762\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8d667" Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.304248 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11d0d9c2-470c-4e30-a64c-f854d5c99762-config-volume\") pod \"collect-profiles-29493585-8d667\" (UID: \"11d0d9c2-470c-4e30-a64c-f854d5c99762\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8d667" Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.304388 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9mhl\" (UniqueName: \"kubernetes.io/projected/11d0d9c2-470c-4e30-a64c-f854d5c99762-kube-api-access-w9mhl\") pod \"collect-profiles-29493585-8d667\" (UID: \"11d0d9c2-470c-4e30-a64c-f854d5c99762\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8d667" Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.304467 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11d0d9c2-470c-4e30-a64c-f854d5c99762-secret-volume\") pod \"collect-profiles-29493585-8d667\" (UID: \"11d0d9c2-470c-4e30-a64c-f854d5c99762\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8d667" Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.305357 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11d0d9c2-470c-4e30-a64c-f854d5c99762-config-volume\") pod 
\"collect-profiles-29493585-8d667\" (UID: \"11d0d9c2-470c-4e30-a64c-f854d5c99762\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8d667" Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.320867 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11d0d9c2-470c-4e30-a64c-f854d5c99762-secret-volume\") pod \"collect-profiles-29493585-8d667\" (UID: \"11d0d9c2-470c-4e30-a64c-f854d5c99762\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8d667" Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.321913 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9mhl\" (UniqueName: \"kubernetes.io/projected/11d0d9c2-470c-4e30-a64c-f854d5c99762-kube-api-access-w9mhl\") pod \"collect-profiles-29493585-8d667\" (UID: \"11d0d9c2-470c-4e30-a64c-f854d5c99762\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8d667" Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.481111 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8d667" Jan 28 15:45:00 crc kubenswrapper[4981]: I0128 15:45:00.923425 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493585-8d667"] Jan 28 15:45:01 crc kubenswrapper[4981]: I0128 15:45:01.222999 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8d667" event={"ID":"11d0d9c2-470c-4e30-a64c-f854d5c99762","Type":"ContainerStarted","Data":"7d3df21754e4d795a53e7129d54763effd9eb3fd16e22a01f252d18f95696435"} Jan 28 15:45:02 crc kubenswrapper[4981]: I0128 15:45:02.242972 4981 generic.go:334] "Generic (PLEG): container finished" podID="11d0d9c2-470c-4e30-a64c-f854d5c99762" containerID="b9cdfe6bd0398807916e3207e9a87a7c447ab4d27510ce56be84387a7d56e115" exitCode=0 Jan 28 15:45:02 crc kubenswrapper[4981]: I0128 15:45:02.243133 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8d667" event={"ID":"11d0d9c2-470c-4e30-a64c-f854d5c99762","Type":"ContainerDied","Data":"b9cdfe6bd0398807916e3207e9a87a7c447ab4d27510ce56be84387a7d56e115"} Jan 28 15:45:02 crc kubenswrapper[4981]: I0128 15:45:02.318892 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:45:02 crc kubenswrapper[4981]: E0128 15:45:02.319156 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:45:03 crc kubenswrapper[4981]: I0128 15:45:03.577555 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8d667" Jan 28 15:45:03 crc kubenswrapper[4981]: I0128 15:45:03.699248 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9mhl\" (UniqueName: \"kubernetes.io/projected/11d0d9c2-470c-4e30-a64c-f854d5c99762-kube-api-access-w9mhl\") pod \"11d0d9c2-470c-4e30-a64c-f854d5c99762\" (UID: \"11d0d9c2-470c-4e30-a64c-f854d5c99762\") " Jan 28 15:45:03 crc kubenswrapper[4981]: I0128 15:45:03.699296 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11d0d9c2-470c-4e30-a64c-f854d5c99762-secret-volume\") pod \"11d0d9c2-470c-4e30-a64c-f854d5c99762\" (UID: \"11d0d9c2-470c-4e30-a64c-f854d5c99762\") " Jan 28 15:45:03 crc kubenswrapper[4981]: I0128 15:45:03.699423 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11d0d9c2-470c-4e30-a64c-f854d5c99762-config-volume\") pod \"11d0d9c2-470c-4e30-a64c-f854d5c99762\" (UID: \"11d0d9c2-470c-4e30-a64c-f854d5c99762\") " Jan 28 15:45:03 crc kubenswrapper[4981]: I0128 15:45:03.700319 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11d0d9c2-470c-4e30-a64c-f854d5c99762-config-volume" (OuterVolumeSpecName: "config-volume") pod "11d0d9c2-470c-4e30-a64c-f854d5c99762" (UID: "11d0d9c2-470c-4e30-a64c-f854d5c99762"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:03 crc kubenswrapper[4981]: I0128 15:45:03.708723 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11d0d9c2-470c-4e30-a64c-f854d5c99762-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "11d0d9c2-470c-4e30-a64c-f854d5c99762" (UID: "11d0d9c2-470c-4e30-a64c-f854d5c99762"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:03 crc kubenswrapper[4981]: I0128 15:45:03.713447 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11d0d9c2-470c-4e30-a64c-f854d5c99762-kube-api-access-w9mhl" (OuterVolumeSpecName: "kube-api-access-w9mhl") pod "11d0d9c2-470c-4e30-a64c-f854d5c99762" (UID: "11d0d9c2-470c-4e30-a64c-f854d5c99762"). InnerVolumeSpecName "kube-api-access-w9mhl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:03 crc kubenswrapper[4981]: I0128 15:45:03.802004 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9mhl\" (UniqueName: \"kubernetes.io/projected/11d0d9c2-470c-4e30-a64c-f854d5c99762-kube-api-access-w9mhl\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:03 crc kubenswrapper[4981]: I0128 15:45:03.802067 4981 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/11d0d9c2-470c-4e30-a64c-f854d5c99762-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:03 crc kubenswrapper[4981]: I0128 15:45:03.802085 4981 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11d0d9c2-470c-4e30-a64c-f854d5c99762-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:04 crc kubenswrapper[4981]: I0128 15:45:04.261539 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8d667" event={"ID":"11d0d9c2-470c-4e30-a64c-f854d5c99762","Type":"ContainerDied","Data":"7d3df21754e4d795a53e7129d54763effd9eb3fd16e22a01f252d18f95696435"} Jan 28 15:45:04 crc kubenswrapper[4981]: I0128 15:45:04.261586 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d3df21754e4d795a53e7129d54763effd9eb3fd16e22a01f252d18f95696435" Jan 28 15:45:04 crc kubenswrapper[4981]: I0128 15:45:04.261642 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493585-8d667" Jan 28 15:45:04 crc kubenswrapper[4981]: I0128 15:45:04.654232 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf"] Jan 28 15:45:04 crc kubenswrapper[4981]: I0128 15:45:04.685580 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493540-78fzf"] Jan 28 15:45:05 crc kubenswrapper[4981]: I0128 15:45:05.333557 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed4c6fed-3e17-40a9-b844-adc144028848" path="/var/lib/kubelet/pods/ed4c6fed-3e17-40a9-b844-adc144028848/volumes" Jan 28 15:45:14 crc kubenswrapper[4981]: I0128 15:45:14.318912 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:45:14 crc kubenswrapper[4981]: E0128 15:45:14.320080 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:45:29 crc kubenswrapper[4981]: I0128 15:45:29.327260 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:45:29 crc kubenswrapper[4981]: E0128 15:45:29.328307 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:45:33 crc kubenswrapper[4981]: I0128 15:45:33.677694 4981 scope.go:117] "RemoveContainer" containerID="f997bcef97eadacc863d035087f4cbcb25c94fc21b494676a224659f1514b8a5" Jan 28 15:45:36 crc kubenswrapper[4981]: I0128 15:45:36.550177 4981 generic.go:334] "Generic (PLEG): container finished" podID="d6e35d22-36ba-4506-a8bf-f0a7f539502a" containerID="7e71a2366c7e1a93fb3b666db15093a30f8b7ffdc5230dce6a37d0e26d0ba8bd" exitCode=0 Jan 28 15:45:36 crc kubenswrapper[4981]: I0128 15:45:36.550436 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" event={"ID":"d6e35d22-36ba-4506-a8bf-f0a7f539502a","Type":"ContainerDied","Data":"7e71a2366c7e1a93fb3b666db15093a30f8b7ffdc5230dce6a37d0e26d0ba8bd"} Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.102448 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.196474 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-cell1-compute-config-1\") pod \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.196573 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-extra-config-0\") pod \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.196651 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhzwf\" (UniqueName: \"kubernetes.io/projected/d6e35d22-36ba-4506-a8bf-f0a7f539502a-kube-api-access-zhzwf\") pod \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.196679 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-migration-ssh-key-1\") pod \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.196724 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-inventory\") pod \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.196779 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-ssh-key-openstack-edpm-ipam\") pod \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.196803 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-cell1-compute-config-0\") pod \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.196868 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-migration-ssh-key-0\") pod \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.196918 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-combined-ca-bundle\") pod \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\" (UID: \"d6e35d22-36ba-4506-a8bf-f0a7f539502a\") " Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.205066 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d6e35d22-36ba-4506-a8bf-f0a7f539502a" (UID: "d6e35d22-36ba-4506-a8bf-f0a7f539502a"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.217472 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6e35d22-36ba-4506-a8bf-f0a7f539502a-kube-api-access-zhzwf" (OuterVolumeSpecName: "kube-api-access-zhzwf") pod "d6e35d22-36ba-4506-a8bf-f0a7f539502a" (UID: "d6e35d22-36ba-4506-a8bf-f0a7f539502a"). InnerVolumeSpecName "kube-api-access-zhzwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.230421 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "d6e35d22-36ba-4506-a8bf-f0a7f539502a" (UID: "d6e35d22-36ba-4506-a8bf-f0a7f539502a"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.232144 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d6e35d22-36ba-4506-a8bf-f0a7f539502a" (UID: "d6e35d22-36ba-4506-a8bf-f0a7f539502a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.234333 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "d6e35d22-36ba-4506-a8bf-f0a7f539502a" (UID: "d6e35d22-36ba-4506-a8bf-f0a7f539502a"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.237027 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-inventory" (OuterVolumeSpecName: "inventory") pod "d6e35d22-36ba-4506-a8bf-f0a7f539502a" (UID: "d6e35d22-36ba-4506-a8bf-f0a7f539502a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.240049 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "d6e35d22-36ba-4506-a8bf-f0a7f539502a" (UID: "d6e35d22-36ba-4506-a8bf-f0a7f539502a"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.246411 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "d6e35d22-36ba-4506-a8bf-f0a7f539502a" (UID: "d6e35d22-36ba-4506-a8bf-f0a7f539502a"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.249483 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "d6e35d22-36ba-4506-a8bf-f0a7f539502a" (UID: "d6e35d22-36ba-4506-a8bf-f0a7f539502a"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.299397 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhzwf\" (UniqueName: \"kubernetes.io/projected/d6e35d22-36ba-4506-a8bf-f0a7f539502a-kube-api-access-zhzwf\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.299452 4981 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.299463 4981 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.299478 4981 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.299488 4981 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.299499 4981 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.299511 4981 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.299523 4981 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.299535 4981 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d6e35d22-36ba-4506-a8bf-f0a7f539502a-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.575738 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" event={"ID":"d6e35d22-36ba-4506-a8bf-f0a7f539502a","Type":"ContainerDied","Data":"a2d30797700b0067d6caef1e5742147fd5ffaf73699095cd3679c3950504b3bf"} Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.576035 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2d30797700b0067d6caef1e5742147fd5ffaf73699095cd3679c3950504b3bf" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.575803 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gx6fp" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.684430 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj"] Jan 28 15:45:38 crc kubenswrapper[4981]: E0128 15:45:38.684807 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6e35d22-36ba-4506-a8bf-f0a7f539502a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.684823 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6e35d22-36ba-4506-a8bf-f0a7f539502a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 28 15:45:38 crc kubenswrapper[4981]: E0128 15:45:38.684850 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d0d9c2-470c-4e30-a64c-f854d5c99762" containerName="collect-profiles" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.684857 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d0d9c2-470c-4e30-a64c-f854d5c99762" containerName="collect-profiles" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.685031 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="11d0d9c2-470c-4e30-a64c-f854d5c99762" containerName="collect-profiles" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.685051 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6e35d22-36ba-4506-a8bf-f0a7f539502a" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.685836 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.692750 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.692831 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.692909 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.692980 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.693287 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-pz626" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.707650 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj"] Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.818667 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.818734 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tll2t\" (UniqueName: 
\"kubernetes.io/projected/b2a912b2-d6a7-4cd5-8cba-66b942182410-kube-api-access-tll2t\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.818798 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.818999 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.819127 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.819169 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.819285 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.920701 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.920784 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.920811 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.920854 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.920915 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.920946 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tll2t\" (UniqueName: \"kubernetes.io/projected/b2a912b2-d6a7-4cd5-8cba-66b942182410-kube-api-access-tll2t\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.920994 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.925880 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.927569 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.927595 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ssh-key-openstack-edpm-ipam\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.928071 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.928262 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.928511 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:38 crc kubenswrapper[4981]: I0128 15:45:38.948448 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tll2t\" (UniqueName: \"kubernetes.io/projected/b2a912b2-d6a7-4cd5-8cba-66b942182410-kube-api-access-tll2t\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:39 crc kubenswrapper[4981]: I0128 15:45:39.001934 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:45:39 crc kubenswrapper[4981]: I0128 15:45:39.575895 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj"] Jan 28 15:45:40 crc kubenswrapper[4981]: I0128 15:45:40.599515 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" event={"ID":"b2a912b2-d6a7-4cd5-8cba-66b942182410","Type":"ContainerStarted","Data":"daa503c390ca91ace824b0d41650c22514897a60adee61363384269e3144a4af"} Jan 28 15:45:40 crc kubenswrapper[4981]: I0128 15:45:40.600085 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" event={"ID":"b2a912b2-d6a7-4cd5-8cba-66b942182410","Type":"ContainerStarted","Data":"9a05ee8e5c3c5330b1ba4d94e8d1d6d39fc3b406caaad6e98425d1b9d5dbe17e"} Jan 28 15:45:40 crc kubenswrapper[4981]: I0128 15:45:40.620929 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" podStartSLOduration=2.146902546 podStartE2EDuration="2.620911125s" podCreationTimestamp="2026-01-28 15:45:38 +0000 UTC" firstStartedPulling="2026-01-28 15:45:39.579930214 +0000 UTC m=+2551.032088455" lastFinishedPulling="2026-01-28 15:45:40.053938783 +0000 UTC m=+2551.506097034" observedRunningTime="2026-01-28 15:45:40.614003844 +0000 UTC m=+2552.066162085" watchObservedRunningTime="2026-01-28 15:45:40.620911125 +0000 UTC m=+2552.073069366" Jan 28 15:45:43 crc kubenswrapper[4981]: I0128 15:45:43.318718 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:45:43 crc kubenswrapper[4981]: E0128 15:45:43.320490 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:45:58 crc kubenswrapper[4981]: I0128 15:45:58.319405 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:45:58 crc kubenswrapper[4981]: E0128 15:45:58.320665 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:46:10 crc kubenswrapper[4981]: I0128 15:46:10.319349 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:46:10 crc kubenswrapper[4981]: E0128 15:46:10.320544 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:46:24 crc kubenswrapper[4981]: I0128 15:46:24.319634 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:46:24 crc kubenswrapper[4981]: E0128 15:46:24.321163 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:46:36 crc kubenswrapper[4981]: I0128 15:46:36.319552 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:46:36 crc kubenswrapper[4981]: E0128 15:46:36.320597 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:46:47 crc kubenswrapper[4981]: I0128 15:46:47.319621 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:46:47 crc kubenswrapper[4981]: E0128 15:46:47.320311 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:46:58 crc kubenswrapper[4981]: I0128 15:46:58.319420 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:46:58 crc kubenswrapper[4981]: E0128 15:46:58.321134 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:47:09 crc kubenswrapper[4981]: I0128 15:47:09.321750 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:47:09 crc kubenswrapper[4981]: E0128 15:47:09.323056 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:47:21 crc kubenswrapper[4981]: I0128 15:47:21.318869 4981 
scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:47:21 crc kubenswrapper[4981]: E0128 15:47:21.319904 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:47:34 crc kubenswrapper[4981]: I0128 15:47:34.320040 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:47:34 crc kubenswrapper[4981]: E0128 15:47:34.321109 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:47:49 crc kubenswrapper[4981]: I0128 15:47:49.324726 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:47:49 crc kubenswrapper[4981]: E0128 15:47:49.326772 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:48:03 crc kubenswrapper[4981]: I0128 15:48:03.011391 4981 generic.go:334] "Generic (PLEG): container finished" podID="b2a912b2-d6a7-4cd5-8cba-66b942182410" containerID="daa503c390ca91ace824b0d41650c22514897a60adee61363384269e3144a4af" exitCode=0 Jan 28 15:48:03 crc kubenswrapper[4981]: I0128 15:48:03.011522 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" event={"ID":"b2a912b2-d6a7-4cd5-8cba-66b942182410","Type":"ContainerDied","Data":"daa503c390ca91ace824b0d41650c22514897a60adee61363384269e3144a4af"} Jan 28 15:48:03 crc kubenswrapper[4981]: I0128 15:48:03.320599 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:48:03 crc kubenswrapper[4981]: E0128 15:48:03.321491 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.474345 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.603052 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ceilometer-compute-config-data-2\") pod \"b2a912b2-d6a7-4cd5-8cba-66b942182410\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.603270 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ceilometer-compute-config-data-0\") pod \"b2a912b2-d6a7-4cd5-8cba-66b942182410\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.603350 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tll2t\" (UniqueName: \"kubernetes.io/projected/b2a912b2-d6a7-4cd5-8cba-66b942182410-kube-api-access-tll2t\") pod \"b2a912b2-d6a7-4cd5-8cba-66b942182410\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.603388 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ssh-key-openstack-edpm-ipam\") pod \"b2a912b2-d6a7-4cd5-8cba-66b942182410\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.603426 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-telemetry-combined-ca-bundle\") pod \"b2a912b2-d6a7-4cd5-8cba-66b942182410\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.603472 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-inventory\") pod \"b2a912b2-d6a7-4cd5-8cba-66b942182410\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.603614 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ceilometer-compute-config-data-1\") pod \"b2a912b2-d6a7-4cd5-8cba-66b942182410\" (UID: \"b2a912b2-d6a7-4cd5-8cba-66b942182410\") " Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.609247 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b2a912b2-d6a7-4cd5-8cba-66b942182410" (UID: "b2a912b2-d6a7-4cd5-8cba-66b942182410"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.636919 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2a912b2-d6a7-4cd5-8cba-66b942182410-kube-api-access-tll2t" (OuterVolumeSpecName: "kube-api-access-tll2t") pod "b2a912b2-d6a7-4cd5-8cba-66b942182410" (UID: "b2a912b2-d6a7-4cd5-8cba-66b942182410"). InnerVolumeSpecName "kube-api-access-tll2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.641845 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "b2a912b2-d6a7-4cd5-8cba-66b942182410" (UID: "b2a912b2-d6a7-4cd5-8cba-66b942182410"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.642443 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "b2a912b2-d6a7-4cd5-8cba-66b942182410" (UID: "b2a912b2-d6a7-4cd5-8cba-66b942182410"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.647667 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b2a912b2-d6a7-4cd5-8cba-66b942182410" (UID: "b2a912b2-d6a7-4cd5-8cba-66b942182410"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.648746 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "b2a912b2-d6a7-4cd5-8cba-66b942182410" (UID: "b2a912b2-d6a7-4cd5-8cba-66b942182410"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.660636 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-inventory" (OuterVolumeSpecName: "inventory") pod "b2a912b2-d6a7-4cd5-8cba-66b942182410" (UID: "b2a912b2-d6a7-4cd5-8cba-66b942182410"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.706063 4981 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.706113 4981 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.706128 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tll2t\" (UniqueName: \"kubernetes.io/projected/b2a912b2-d6a7-4cd5-8cba-66b942182410-kube-api-access-tll2t\") on node \"crc\" DevicePath \"\"" Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.706140 4981 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.706151 4981 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.706162 4981 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 15:48:04 crc kubenswrapper[4981]: I0128 15:48:04.706176 4981 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b2a912b2-d6a7-4cd5-8cba-66b942182410-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 28 15:48:05 crc kubenswrapper[4981]: I0128 15:48:05.034735 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" event={"ID":"b2a912b2-d6a7-4cd5-8cba-66b942182410","Type":"ContainerDied","Data":"9a05ee8e5c3c5330b1ba4d94e8d1d6d39fc3b406caaad6e98425d1b9d5dbe17e"} Jan 28 15:48:05 crc kubenswrapper[4981]: I0128 15:48:05.034781 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a05ee8e5c3c5330b1ba4d94e8d1d6d39fc3b406caaad6e98425d1b9d5dbe17e" Jan 28 15:48:05 crc kubenswrapper[4981]: I0128 15:48:05.034800 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj" Jan 28 15:48:17 crc kubenswrapper[4981]: I0128 15:48:17.318797 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:48:17 crc kubenswrapper[4981]: E0128 15:48:17.319726 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:48:29 crc kubenswrapper[4981]: I0128 15:48:29.329849 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:48:30 crc kubenswrapper[4981]: I0128 15:48:30.355637 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerStarted","Data":"b400ebc18b32527d4a290e49d092858c1b4201e6d520ad3ffd416135ae56e45e"} Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.418827 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 28 15:49:10 crc kubenswrapper[4981]: E0128 15:49:10.419986 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2a912b2-d6a7-4cd5-8cba-66b942182410" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.420010 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2a912b2-d6a7-4cd5-8cba-66b942182410" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.420421 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2a912b2-d6a7-4cd5-8cba-66b942182410" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.421329 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.421453 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.424594 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.424907 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.425074 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-kzkdq" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.425244 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.543287 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c34b143a-0284-461d-a788-106a5f6dca6c-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.543383 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s74k4\" (UniqueName: \"kubernetes.io/projected/c34b143a-0284-461d-a788-106a5f6dca6c-kube-api-access-s74k4\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.543418 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c34b143a-0284-461d-a788-106a5f6dca6c-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.543640 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c34b143a-0284-461d-a788-106a5f6dca6c-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.543880 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c34b143a-0284-461d-a788-106a5f6dca6c-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.544160 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c34b143a-0284-461d-a788-106a5f6dca6c-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.544377 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c34b143a-0284-461d-a788-106a5f6dca6c-config-data\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " 
pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.544475 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c34b143a-0284-461d-a788-106a5f6dca6c-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.544533 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.646577 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c34b143a-0284-461d-a788-106a5f6dca6c-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.646693 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c34b143a-0284-461d-a788-106a5f6dca6c-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.646770 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c34b143a-0284-461d-a788-106a5f6dca6c-config-data\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.646798 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c34b143a-0284-461d-a788-106a5f6dca6c-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.646825 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.646864 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c34b143a-0284-461d-a788-106a5f6dca6c-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.646920 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s74k4\" (UniqueName: \"kubernetes.io/projected/c34b143a-0284-461d-a788-106a5f6dca6c-kube-api-access-s74k4\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.646947 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c34b143a-0284-461d-a788-106a5f6dca6c-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.646970 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c34b143a-0284-461d-a788-106a5f6dca6c-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.647368 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c34b143a-0284-461d-a788-106a5f6dca6c-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.647412 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.647464 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c34b143a-0284-461d-a788-106a5f6dca6c-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.648227 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c34b143a-0284-461d-a788-106a5f6dca6c-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.649164 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c34b143a-0284-461d-a788-106a5f6dca6c-config-data\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.663867 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c34b143a-0284-461d-a788-106a5f6dca6c-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.664028 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c34b143a-0284-461d-a788-106a5f6dca6c-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.664087 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/c34b143a-0284-461d-a788-106a5f6dca6c-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.680285 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s74k4\" (UniqueName: \"kubernetes.io/projected/c34b143a-0284-461d-a788-106a5f6dca6c-kube-api-access-s74k4\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.683545 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") " pod="openstack/tempest-tests-tempest" Jan 28 15:49:10 crc kubenswrapper[4981]: I0128 15:49:10.759969 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 15:49:11 crc kubenswrapper[4981]: I0128 15:49:11.223412 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 28 15:49:11 crc kubenswrapper[4981]: I0128 15:49:11.238858 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:49:11 crc kubenswrapper[4981]: I0128 15:49:11.713929 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c34b143a-0284-461d-a788-106a5f6dca6c","Type":"ContainerStarted","Data":"24930454d4256a90f8577289750f63ed4c21f31a8491af4fa0a57f630deaded2"} Jan 28 15:50:09 crc kubenswrapper[4981]: E0128 15:50:09.688852 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 28 15:50:09 crc kubenswrapper[4981]: E0128 15:50:09.689698 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s74k4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(c34b143a-0284-461d-a788-106a5f6dca6c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 15:50:09 crc kubenswrapper[4981]: E0128 15:50:09.691143 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="c34b143a-0284-461d-a788-106a5f6dca6c" Jan 28 15:50:10 crc kubenswrapper[4981]: E0128 15:50:10.365205 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="c34b143a-0284-461d-a788-106a5f6dca6c" Jan 28 15:50:23 crc kubenswrapper[4981]: I0128 15:50:23.020024 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 28 15:50:24 crc kubenswrapper[4981]: I0128 15:50:24.505502 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c34b143a-0284-461d-a788-106a5f6dca6c","Type":"ContainerStarted","Data":"bc1080e0f851f577c791f267752187334b1531601ceabe4cd9250bc7f6e32822"} Jan 28 15:50:24 crc kubenswrapper[4981]: I0128 15:50:24.525272 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.746882844 podStartE2EDuration="1m15.52525415s" podCreationTimestamp="2026-01-28 15:49:09 +0000 UTC" firstStartedPulling="2026-01-28 15:49:11.238569664 +0000 UTC m=+2762.690727905" lastFinishedPulling="2026-01-28 15:50:23.01694097 +0000 UTC m=+2834.469099211" observedRunningTime="2026-01-28 15:50:24.523908634 +0000 UTC m=+2835.976066925" watchObservedRunningTime="2026-01-28 15:50:24.52525415 +0000 UTC m=+2835.977412391" Jan 28 15:50:29 crc kubenswrapper[4981]: I0128 15:50:29.928440 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nwmj8"] Jan 28 15:50:29 crc kubenswrapper[4981]: I0128 15:50:29.934507 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwmj8" Jan 28 15:50:29 crc kubenswrapper[4981]: I0128 15:50:29.943530 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwmj8"] Jan 28 15:50:29 crc kubenswrapper[4981]: I0128 15:50:29.977160 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37452c31-8de8-40fb-b030-237807e91bb5-utilities\") pod \"redhat-marketplace-nwmj8\" (UID: \"37452c31-8de8-40fb-b030-237807e91bb5\") " pod="openshift-marketplace/redhat-marketplace-nwmj8" Jan 28 15:50:29 crc kubenswrapper[4981]: I0128 15:50:29.977762 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zprjt\" (UniqueName: \"kubernetes.io/projected/37452c31-8de8-40fb-b030-237807e91bb5-kube-api-access-zprjt\") pod \"redhat-marketplace-nwmj8\" (UID: \"37452c31-8de8-40fb-b030-237807e91bb5\") " pod="openshift-marketplace/redhat-marketplace-nwmj8" Jan 28 15:50:29 crc kubenswrapper[4981]: I0128 15:50:29.977890 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37452c31-8de8-40fb-b030-237807e91bb5-catalog-content\") pod \"redhat-marketplace-nwmj8\" (UID: \"37452c31-8de8-40fb-b030-237807e91bb5\") " pod="openshift-marketplace/redhat-marketplace-nwmj8" Jan 28 15:50:30 crc kubenswrapper[4981]: I0128 15:50:30.079412 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37452c31-8de8-40fb-b030-237807e91bb5-utilities\") pod \"redhat-marketplace-nwmj8\" (UID: \"37452c31-8de8-40fb-b030-237807e91bb5\") " pod="openshift-marketplace/redhat-marketplace-nwmj8" Jan 28 15:50:30 crc kubenswrapper[4981]: I0128 15:50:30.079495 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zprjt\" (UniqueName: \"kubernetes.io/projected/37452c31-8de8-40fb-b030-237807e91bb5-kube-api-access-zprjt\") pod \"redhat-marketplace-nwmj8\" (UID: \"37452c31-8de8-40fb-b030-237807e91bb5\") " pod="openshift-marketplace/redhat-marketplace-nwmj8" Jan 28 15:50:30 crc kubenswrapper[4981]: I0128 15:50:30.079882 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37452c31-8de8-40fb-b030-237807e91bb5-catalog-content\") pod \"redhat-marketplace-nwmj8\" (UID: \"37452c31-8de8-40fb-b030-237807e91bb5\") " pod="openshift-marketplace/redhat-marketplace-nwmj8" Jan 28 15:50:30 crc kubenswrapper[4981]: I0128 15:50:30.080078 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37452c31-8de8-40fb-b030-237807e91bb5-utilities\") pod \"redhat-marketplace-nwmj8\" (UID: \"37452c31-8de8-40fb-b030-237807e91bb5\") " pod="openshift-marketplace/redhat-marketplace-nwmj8" Jan 28 15:50:30 crc kubenswrapper[4981]: I0128 15:50:30.080295 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37452c31-8de8-40fb-b030-237807e91bb5-catalog-content\") pod \"redhat-marketplace-nwmj8\" (UID: \"37452c31-8de8-40fb-b030-237807e91bb5\") " pod="openshift-marketplace/redhat-marketplace-nwmj8" Jan 28 15:50:30 crc kubenswrapper[4981]: I0128 15:50:30.100234 4981 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-zprjt\" (UniqueName: \"kubernetes.io/projected/37452c31-8de8-40fb-b030-237807e91bb5-kube-api-access-zprjt\") pod \"redhat-marketplace-nwmj8\" (UID: \"37452c31-8de8-40fb-b030-237807e91bb5\") " pod="openshift-marketplace/redhat-marketplace-nwmj8" Jan 28 15:50:30 crc kubenswrapper[4981]: I0128 15:50:30.257155 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwmj8" Jan 28 15:50:30 crc kubenswrapper[4981]: I0128 15:50:30.922603 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwmj8"] Jan 28 15:50:31 crc kubenswrapper[4981]: I0128 15:50:31.601281 4981 generic.go:334] "Generic (PLEG): container finished" podID="37452c31-8de8-40fb-b030-237807e91bb5" containerID="453d8e7ec2946ff1f6a2037661ce589d47b14206201e31111c855e472b2be429" exitCode=0 Jan 28 15:50:31 crc kubenswrapper[4981]: I0128 15:50:31.601354 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwmj8" event={"ID":"37452c31-8de8-40fb-b030-237807e91bb5","Type":"ContainerDied","Data":"453d8e7ec2946ff1f6a2037661ce589d47b14206201e31111c855e472b2be429"} Jan 28 15:50:31 crc kubenswrapper[4981]: I0128 15:50:31.601545 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwmj8" event={"ID":"37452c31-8de8-40fb-b030-237807e91bb5","Type":"ContainerStarted","Data":"7923c3f1dc4a715d426f7d76792ad6c225ed3b01ff016bc36f98f7ddf86e198b"} Jan 28 15:50:32 crc kubenswrapper[4981]: I0128 15:50:32.613043 4981 generic.go:334] "Generic (PLEG): container finished" podID="37452c31-8de8-40fb-b030-237807e91bb5" containerID="7c56c8240ca3eff4d7399e38593484ee7dd1b63bce976d0d144dc8d2f09ca7c9" exitCode=0 Jan 28 15:50:32 crc kubenswrapper[4981]: I0128 15:50:32.613105 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwmj8" event={"ID":"37452c31-8de8-40fb-b030-237807e91bb5","Type":"ContainerDied","Data":"7c56c8240ca3eff4d7399e38593484ee7dd1b63bce976d0d144dc8d2f09ca7c9"} Jan 28 15:50:33 crc kubenswrapper[4981]: I0128 15:50:33.623625 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwmj8" event={"ID":"37452c31-8de8-40fb-b030-237807e91bb5","Type":"ContainerStarted","Data":"ce4d8461a6d446f56337138bd31b8e4a58efb5d8d784bb0fa9bb3c560cf5a858"} Jan 28 15:50:33 crc kubenswrapper[4981]: I0128 15:50:33.644899 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nwmj8" podStartSLOduration=2.894777405 podStartE2EDuration="4.644879042s" podCreationTimestamp="2026-01-28 15:50:29 +0000 UTC" firstStartedPulling="2026-01-28 15:50:31.602808358 +0000 UTC m=+2843.054966609" lastFinishedPulling="2026-01-28 15:50:33.352910005 +0000 UTC m=+2844.805068246" observedRunningTime="2026-01-28 15:50:33.638761491 +0000 UTC m=+2845.090919732" watchObservedRunningTime="2026-01-28 15:50:33.644879042 +0000 UTC m=+2845.097037283" Jan 28 15:50:40 crc kubenswrapper[4981]: I0128 15:50:40.257762 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nwmj8" Jan 28 15:50:40 crc kubenswrapper[4981]: I0128 15:50:40.258062 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nwmj8" Jan 28 15:50:40 crc kubenswrapper[4981]: I0128 15:50:40.306545 4981 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nwmj8" Jan 28 15:50:40 crc kubenswrapper[4981]: I0128 15:50:40.749463 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nwmj8" Jan 28 15:50:40 crc kubenswrapper[4981]: I0128 15:50:40.809534 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwmj8"] Jan 28 15:50:42 crc kubenswrapper[4981]: I0128 15:50:42.711537 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nwmj8" podUID="37452c31-8de8-40fb-b030-237807e91bb5" containerName="registry-server" containerID="cri-o://ce4d8461a6d446f56337138bd31b8e4a58efb5d8d784bb0fa9bb3c560cf5a858" gracePeriod=2 Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.253971 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwmj8" Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.359630 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37452c31-8de8-40fb-b030-237807e91bb5-catalog-content\") pod \"37452c31-8de8-40fb-b030-237807e91bb5\" (UID: \"37452c31-8de8-40fb-b030-237807e91bb5\") " Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.359980 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37452c31-8de8-40fb-b030-237807e91bb5-utilities\") pod \"37452c31-8de8-40fb-b030-237807e91bb5\" (UID: \"37452c31-8de8-40fb-b030-237807e91bb5\") " Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.360080 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zprjt\" (UniqueName: \"kubernetes.io/projected/37452c31-8de8-40fb-b030-237807e91bb5-kube-api-access-zprjt\") pod \"37452c31-8de8-40fb-b030-237807e91bb5\" (UID: \"37452c31-8de8-40fb-b030-237807e91bb5\") " Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.360842 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37452c31-8de8-40fb-b030-237807e91bb5-utilities" (OuterVolumeSpecName: "utilities") pod "37452c31-8de8-40fb-b030-237807e91bb5" (UID: "37452c31-8de8-40fb-b030-237807e91bb5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.373440 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37452c31-8de8-40fb-b030-237807e91bb5-kube-api-access-zprjt" (OuterVolumeSpecName: "kube-api-access-zprjt") pod "37452c31-8de8-40fb-b030-237807e91bb5" (UID: "37452c31-8de8-40fb-b030-237807e91bb5"). InnerVolumeSpecName "kube-api-access-zprjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.463142 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37452c31-8de8-40fb-b030-237807e91bb5-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.463234 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zprjt\" (UniqueName: \"kubernetes.io/projected/37452c31-8de8-40fb-b030-237807e91bb5-kube-api-access-zprjt\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.721020 4981 generic.go:334] "Generic (PLEG): container finished" podID="37452c31-8de8-40fb-b030-237807e91bb5" containerID="ce4d8461a6d446f56337138bd31b8e4a58efb5d8d784bb0fa9bb3c560cf5a858" exitCode=0 Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.721067 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwmj8" event={"ID":"37452c31-8de8-40fb-b030-237807e91bb5","Type":"ContainerDied","Data":"ce4d8461a6d446f56337138bd31b8e4a58efb5d8d784bb0fa9bb3c560cf5a858"} Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.721077 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nwmj8" Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.721097 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nwmj8" event={"ID":"37452c31-8de8-40fb-b030-237807e91bb5","Type":"ContainerDied","Data":"7923c3f1dc4a715d426f7d76792ad6c225ed3b01ff016bc36f98f7ddf86e198b"} Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.721117 4981 scope.go:117] "RemoveContainer" containerID="ce4d8461a6d446f56337138bd31b8e4a58efb5d8d784bb0fa9bb3c560cf5a858" Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.746556 4981 scope.go:117] "RemoveContainer" containerID="7c56c8240ca3eff4d7399e38593484ee7dd1b63bce976d0d144dc8d2f09ca7c9" Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.767513 4981 scope.go:117] "RemoveContainer" containerID="453d8e7ec2946ff1f6a2037661ce589d47b14206201e31111c855e472b2be429" Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.804952 4981 scope.go:117] "RemoveContainer" containerID="ce4d8461a6d446f56337138bd31b8e4a58efb5d8d784bb0fa9bb3c560cf5a858" Jan 28 15:50:43 crc kubenswrapper[4981]: E0128 15:50:43.805423 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce4d8461a6d446f56337138bd31b8e4a58efb5d8d784bb0fa9bb3c560cf5a858\": container with ID starting with ce4d8461a6d446f56337138bd31b8e4a58efb5d8d784bb0fa9bb3c560cf5a858 not found: ID does not exist" containerID="ce4d8461a6d446f56337138bd31b8e4a58efb5d8d784bb0fa9bb3c560cf5a858" Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.805462 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce4d8461a6d446f56337138bd31b8e4a58efb5d8d784bb0fa9bb3c560cf5a858"} err="failed to get container status \"ce4d8461a6d446f56337138bd31b8e4a58efb5d8d784bb0fa9bb3c560cf5a858\": rpc error: code = NotFound desc = could not find container \"ce4d8461a6d446f56337138bd31b8e4a58efb5d8d784bb0fa9bb3c560cf5a858\": container with ID starting with ce4d8461a6d446f56337138bd31b8e4a58efb5d8d784bb0fa9bb3c560cf5a858 not found: ID does not exist" Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.805483 4981 scope.go:117] 
"RemoveContainer" containerID="7c56c8240ca3eff4d7399e38593484ee7dd1b63bce976d0d144dc8d2f09ca7c9" Jan 28 15:50:43 crc kubenswrapper[4981]: E0128 15:50:43.805748 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c56c8240ca3eff4d7399e38593484ee7dd1b63bce976d0d144dc8d2f09ca7c9\": container with ID starting with 7c56c8240ca3eff4d7399e38593484ee7dd1b63bce976d0d144dc8d2f09ca7c9 not found: ID does not exist" containerID="7c56c8240ca3eff4d7399e38593484ee7dd1b63bce976d0d144dc8d2f09ca7c9" Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.805862 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c56c8240ca3eff4d7399e38593484ee7dd1b63bce976d0d144dc8d2f09ca7c9"} err="failed to get container status \"7c56c8240ca3eff4d7399e38593484ee7dd1b63bce976d0d144dc8d2f09ca7c9\": rpc error: code = NotFound desc = could not find container \"7c56c8240ca3eff4d7399e38593484ee7dd1b63bce976d0d144dc8d2f09ca7c9\": container with ID starting with 7c56c8240ca3eff4d7399e38593484ee7dd1b63bce976d0d144dc8d2f09ca7c9 not found: ID does not exist" Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.805959 4981 scope.go:117] "RemoveContainer" containerID="453d8e7ec2946ff1f6a2037661ce589d47b14206201e31111c855e472b2be429" Jan 28 15:50:43 crc kubenswrapper[4981]: E0128 15:50:43.806387 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"453d8e7ec2946ff1f6a2037661ce589d47b14206201e31111c855e472b2be429\": container with ID starting with 453d8e7ec2946ff1f6a2037661ce589d47b14206201e31111c855e472b2be429 not found: ID does not exist" containerID="453d8e7ec2946ff1f6a2037661ce589d47b14206201e31111c855e472b2be429" Jan 28 15:50:43 crc kubenswrapper[4981]: I0128 15:50:43.806409 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"453d8e7ec2946ff1f6a2037661ce589d47b14206201e31111c855e472b2be429"} err="failed to get container status \"453d8e7ec2946ff1f6a2037661ce589d47b14206201e31111c855e472b2be429\": rpc error: code = NotFound desc = could not find container \"453d8e7ec2946ff1f6a2037661ce589d47b14206201e31111c855e472b2be429\": container with ID starting with 453d8e7ec2946ff1f6a2037661ce589d47b14206201e31111c855e472b2be429 not found: ID does not exist" Jan 28 15:50:44 crc kubenswrapper[4981]: I0128 15:50:44.006089 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37452c31-8de8-40fb-b030-237807e91bb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37452c31-8de8-40fb-b030-237807e91bb5" (UID: "37452c31-8de8-40fb-b030-237807e91bb5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:50:44 crc kubenswrapper[4981]: I0128 15:50:44.057445 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwmj8"] Jan 28 15:50:44 crc kubenswrapper[4981]: I0128 15:50:44.068319 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nwmj8"] Jan 28 15:50:44 crc kubenswrapper[4981]: I0128 15:50:44.074270 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37452c31-8de8-40fb-b030-237807e91bb5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:50:45 crc kubenswrapper[4981]: I0128 15:50:45.327886 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37452c31-8de8-40fb-b030-237807e91bb5" path="/var/lib/kubelet/pods/37452c31-8de8-40fb-b030-237807e91bb5/volumes" Jan 28 15:50:49 crc kubenswrapper[4981]: I0128 15:50:49.897714 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:50:49 crc kubenswrapper[4981]: I0128 15:50:49.898231 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:51:19 crc kubenswrapper[4981]: I0128 15:51:19.897262 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:51:19 crc kubenswrapper[4981]: I0128 15:51:19.898821 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:51:49 crc kubenswrapper[4981]: I0128 15:51:49.897846 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:51:49 crc kubenswrapper[4981]: I0128 15:51:49.898427 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:51:49 crc kubenswrapper[4981]: I0128 15:51:49.898491 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:51:49 crc kubenswrapper[4981]: I0128 15:51:49.899652 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"b400ebc18b32527d4a290e49d092858c1b4201e6d520ad3ffd416135ae56e45e"} pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:51:49 crc kubenswrapper[4981]: I0128 15:51:49.899723 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" containerID="cri-o://b400ebc18b32527d4a290e49d092858c1b4201e6d520ad3ffd416135ae56e45e" gracePeriod=600 Jan 28 15:51:50 crc kubenswrapper[4981]: I0128 15:51:50.411018 4981 generic.go:334] "Generic (PLEG): container finished" podID="67525d77-715e-4ec3-bdbb-6854657355c0" containerID="b400ebc18b32527d4a290e49d092858c1b4201e6d520ad3ffd416135ae56e45e" exitCode=0 Jan 28 15:51:50 crc kubenswrapper[4981]: I0128 15:51:50.411152 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerDied","Data":"b400ebc18b32527d4a290e49d092858c1b4201e6d520ad3ffd416135ae56e45e"} Jan 28 15:51:50 crc kubenswrapper[4981]: I0128 15:51:50.411496 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerStarted","Data":"507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8"} Jan 28 15:51:50 crc kubenswrapper[4981]: I0128 15:51:50.411525 4981 scope.go:117] "RemoveContainer" containerID="7598569239c2cf6d2e3bce6aea6b218b86d9bea8845ffb93394e9b66d8d2064f" Jan 28 15:52:44 crc kubenswrapper[4981]: I0128 15:52:44.421879 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-whtzf"] Jan 28 15:52:44 crc kubenswrapper[4981]: E0128 15:52:44.423286 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37452c31-8de8-40fb-b030-237807e91bb5" containerName="extract-content" Jan 28 15:52:44 crc kubenswrapper[4981]: I0128 15:52:44.423301 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="37452c31-8de8-40fb-b030-237807e91bb5" containerName="extract-content" Jan 28 15:52:44 crc kubenswrapper[4981]: E0128 15:52:44.423326 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37452c31-8de8-40fb-b030-237807e91bb5" containerName="extract-utilities" Jan 28 15:52:44 crc kubenswrapper[4981]: I0128 15:52:44.423332 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="37452c31-8de8-40fb-b030-237807e91bb5" containerName="extract-utilities" Jan 28 15:52:44 crc kubenswrapper[4981]: E0128 15:52:44.423348 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37452c31-8de8-40fb-b030-237807e91bb5" containerName="registry-server" Jan 28 15:52:44 crc kubenswrapper[4981]: I0128 15:52:44.423354 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="37452c31-8de8-40fb-b030-237807e91bb5" containerName="registry-server" Jan 28 15:52:44 crc kubenswrapper[4981]: I0128 15:52:44.423533 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="37452c31-8de8-40fb-b030-237807e91bb5" containerName="registry-server" Jan 28 15:52:44 crc kubenswrapper[4981]: I0128 15:52:44.424775 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-whtzf" Jan 28 15:52:44 crc kubenswrapper[4981]: I0128 15:52:44.450120 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-whtzf"] Jan 28 15:52:44 crc kubenswrapper[4981]: I0128 15:52:44.598400 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/031a05bd-ab79-4385-82dc-722ac8aa0f40-utilities\") pod \"community-operators-whtzf\" (UID: \"031a05bd-ab79-4385-82dc-722ac8aa0f40\") " pod="openshift-marketplace/community-operators-whtzf" Jan 28 15:52:44 crc kubenswrapper[4981]: I0128 15:52:44.598443 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/031a05bd-ab79-4385-82dc-722ac8aa0f40-catalog-content\") pod \"community-operators-whtzf\" (UID: \"031a05bd-ab79-4385-82dc-722ac8aa0f40\") " pod="openshift-marketplace/community-operators-whtzf" Jan 28 15:52:44 crc kubenswrapper[4981]: I0128 15:52:44.598469 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4h4hz\" (UniqueName: \"kubernetes.io/projected/031a05bd-ab79-4385-82dc-722ac8aa0f40-kube-api-access-4h4hz\") pod \"community-operators-whtzf\" (UID: \"031a05bd-ab79-4385-82dc-722ac8aa0f40\") " pod="openshift-marketplace/community-operators-whtzf" Jan 28 15:52:44 crc kubenswrapper[4981]: I0128 15:52:44.700449 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/031a05bd-ab79-4385-82dc-722ac8aa0f40-utilities\") pod \"community-operators-whtzf\" (UID: \"031a05bd-ab79-4385-82dc-722ac8aa0f40\") " pod="openshift-marketplace/community-operators-whtzf" Jan 28 15:52:44 crc kubenswrapper[4981]: I0128 15:52:44.700744 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/031a05bd-ab79-4385-82dc-722ac8aa0f40-catalog-content\") pod \"community-operators-whtzf\" (UID: \"031a05bd-ab79-4385-82dc-722ac8aa0f40\") " pod="openshift-marketplace/community-operators-whtzf" Jan 28 15:52:44 crc kubenswrapper[4981]: I0128 15:52:44.700856 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4h4hz\" (UniqueName: \"kubernetes.io/projected/031a05bd-ab79-4385-82dc-722ac8aa0f40-kube-api-access-4h4hz\") pod \"community-operators-whtzf\" (UID: \"031a05bd-ab79-4385-82dc-722ac8aa0f40\") " pod="openshift-marketplace/community-operators-whtzf" Jan 28 15:52:44 crc kubenswrapper[4981]: I0128 15:52:44.700965 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/031a05bd-ab79-4385-82dc-722ac8aa0f40-utilities\") pod \"community-operators-whtzf\" (UID: \"031a05bd-ab79-4385-82dc-722ac8aa0f40\") " pod="openshift-marketplace/community-operators-whtzf" Jan 28 15:52:44 crc kubenswrapper[4981]: I0128 15:52:44.701253 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/031a05bd-ab79-4385-82dc-722ac8aa0f40-catalog-content\") pod \"community-operators-whtzf\" (UID: \"031a05bd-ab79-4385-82dc-722ac8aa0f40\") " pod="openshift-marketplace/community-operators-whtzf" Jan 28 15:52:44 crc kubenswrapper[4981]: I0128 15:52:44.721504 4981 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4h4hz\" (UniqueName: \"kubernetes.io/projected/031a05bd-ab79-4385-82dc-722ac8aa0f40-kube-api-access-4h4hz\") pod \"community-operators-whtzf\" (UID: \"031a05bd-ab79-4385-82dc-722ac8aa0f40\") " pod="openshift-marketplace/community-operators-whtzf" Jan 28 15:52:44 crc kubenswrapper[4981]: I0128 15:52:44.796935 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-whtzf" Jan 28 15:52:45 crc kubenswrapper[4981]: I0128 15:52:45.337409 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-whtzf"] Jan 28 15:52:45 crc kubenswrapper[4981]: I0128 15:52:45.976465 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whtzf" event={"ID":"031a05bd-ab79-4385-82dc-722ac8aa0f40","Type":"ContainerStarted","Data":"f330f2c80a6d6a6fae71146f0e2dcd292d776f4339f20056a2b1342253d66feb"} Jan 28 15:52:45 crc kubenswrapper[4981]: I0128 15:52:45.976806 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whtzf" event={"ID":"031a05bd-ab79-4385-82dc-722ac8aa0f40","Type":"ContainerStarted","Data":"e09f4dc25b7dc7bae3fd6ee94350a3a9b00ee3d89e75c98e2d7f3a97d6a86e85"} Jan 28 15:52:46 crc kubenswrapper[4981]: I0128 15:52:46.986811 4981 generic.go:334] "Generic (PLEG): container finished" podID="031a05bd-ab79-4385-82dc-722ac8aa0f40" containerID="f330f2c80a6d6a6fae71146f0e2dcd292d776f4339f20056a2b1342253d66feb" exitCode=0 Jan 28 15:52:46 crc kubenswrapper[4981]: I0128 15:52:46.986957 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whtzf" event={"ID":"031a05bd-ab79-4385-82dc-722ac8aa0f40","Type":"ContainerDied","Data":"f330f2c80a6d6a6fae71146f0e2dcd292d776f4339f20056a2b1342253d66feb"} Jan 28 15:52:50 crc kubenswrapper[4981]: I0128 15:52:50.014802 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whtzf" event={"ID":"031a05bd-ab79-4385-82dc-722ac8aa0f40","Type":"ContainerStarted","Data":"bca8aabf62855a7f6b3380fdc5460b67976cfd531e0a1f89673afc8ba198f854"} Jan 28 15:52:51 crc kubenswrapper[4981]: I0128 15:52:51.025135 4981 generic.go:334] "Generic (PLEG): container finished" podID="031a05bd-ab79-4385-82dc-722ac8aa0f40" containerID="bca8aabf62855a7f6b3380fdc5460b67976cfd531e0a1f89673afc8ba198f854" exitCode=0 Jan 28 15:52:51 crc kubenswrapper[4981]: I0128 15:52:51.025178 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whtzf" event={"ID":"031a05bd-ab79-4385-82dc-722ac8aa0f40","Type":"ContainerDied","Data":"bca8aabf62855a7f6b3380fdc5460b67976cfd531e0a1f89673afc8ba198f854"} Jan 28 15:52:58 crc kubenswrapper[4981]: I0128 15:52:58.109712 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whtzf" event={"ID":"031a05bd-ab79-4385-82dc-722ac8aa0f40","Type":"ContainerStarted","Data":"7c4edad1ca051d0d6a4e1ac86eec9bbb3be38cec4f19ee767465a982bc3fbaee"} Jan 28 15:52:58 crc kubenswrapper[4981]: I0128 15:52:58.132283 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-whtzf" podStartSLOduration=3.629430661 podStartE2EDuration="14.13226636s" podCreationTimestamp="2026-01-28 15:52:44 +0000 UTC" firstStartedPulling="2026-01-28 15:52:46.989447666 +0000 UTC m=+2978.441605907" lastFinishedPulling="2026-01-28 
15:52:57.492283365 +0000 UTC m=+2988.944441606" observedRunningTime="2026-01-28 15:52:58.131456399 +0000 UTC m=+2989.583614660" watchObservedRunningTime="2026-01-28 15:52:58.13226636 +0000 UTC m=+2989.584424611" Jan 28 15:53:04 crc kubenswrapper[4981]: I0128 15:53:04.797806 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-whtzf" Jan 28 15:53:04 crc kubenswrapper[4981]: I0128 15:53:04.798356 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-whtzf" Jan 28 15:53:04 crc kubenswrapper[4981]: I0128 15:53:04.847661 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-whtzf" Jan 28 15:53:05 crc kubenswrapper[4981]: I0128 15:53:05.224707 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-whtzf" Jan 28 15:53:05 crc kubenswrapper[4981]: I0128 15:53:05.288712 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-whtzf"] Jan 28 15:53:07 crc kubenswrapper[4981]: I0128 15:53:07.182302 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-whtzf" podUID="031a05bd-ab79-4385-82dc-722ac8aa0f40" containerName="registry-server" containerID="cri-o://7c4edad1ca051d0d6a4e1ac86eec9bbb3be38cec4f19ee767465a982bc3fbaee" gracePeriod=2 Jan 28 15:53:07 crc kubenswrapper[4981]: I0128 15:53:07.700241 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-whtzf" Jan 28 15:53:07 crc kubenswrapper[4981]: I0128 15:53:07.721388 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/031a05bd-ab79-4385-82dc-722ac8aa0f40-utilities\") pod \"031a05bd-ab79-4385-82dc-722ac8aa0f40\" (UID: \"031a05bd-ab79-4385-82dc-722ac8aa0f40\") " Jan 28 15:53:07 crc kubenswrapper[4981]: I0128 15:53:07.721589 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4h4hz\" (UniqueName: \"kubernetes.io/projected/031a05bd-ab79-4385-82dc-722ac8aa0f40-kube-api-access-4h4hz\") pod \"031a05bd-ab79-4385-82dc-722ac8aa0f40\" (UID: \"031a05bd-ab79-4385-82dc-722ac8aa0f40\") " Jan 28 15:53:07 crc kubenswrapper[4981]: I0128 15:53:07.721805 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/031a05bd-ab79-4385-82dc-722ac8aa0f40-catalog-content\") pod \"031a05bd-ab79-4385-82dc-722ac8aa0f40\" (UID: \"031a05bd-ab79-4385-82dc-722ac8aa0f40\") " Jan 28 15:53:07 crc kubenswrapper[4981]: I0128 15:53:07.722601 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/031a05bd-ab79-4385-82dc-722ac8aa0f40-utilities" (OuterVolumeSpecName: "utilities") pod "031a05bd-ab79-4385-82dc-722ac8aa0f40" (UID: "031a05bd-ab79-4385-82dc-722ac8aa0f40"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:53:07 crc kubenswrapper[4981]: I0128 15:53:07.725292 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/031a05bd-ab79-4385-82dc-722ac8aa0f40-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:53:07 crc kubenswrapper[4981]: I0128 15:53:07.730437 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/031a05bd-ab79-4385-82dc-722ac8aa0f40-kube-api-access-4h4hz" (OuterVolumeSpecName: "kube-api-access-4h4hz") pod "031a05bd-ab79-4385-82dc-722ac8aa0f40" (UID: "031a05bd-ab79-4385-82dc-722ac8aa0f40"). InnerVolumeSpecName "kube-api-access-4h4hz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:53:07 crc kubenswrapper[4981]: I0128 15:53:07.798335 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/031a05bd-ab79-4385-82dc-722ac8aa0f40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "031a05bd-ab79-4385-82dc-722ac8aa0f40" (UID: "031a05bd-ab79-4385-82dc-722ac8aa0f40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:53:07 crc kubenswrapper[4981]: I0128 15:53:07.827437 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4h4hz\" (UniqueName: \"kubernetes.io/projected/031a05bd-ab79-4385-82dc-722ac8aa0f40-kube-api-access-4h4hz\") on node \"crc\" DevicePath \"\"" Jan 28 15:53:07 crc kubenswrapper[4981]: I0128 15:53:07.827485 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/031a05bd-ab79-4385-82dc-722ac8aa0f40-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:53:08 crc kubenswrapper[4981]: I0128 15:53:08.193613 4981 generic.go:334] "Generic (PLEG): container finished" podID="031a05bd-ab79-4385-82dc-722ac8aa0f40" containerID="7c4edad1ca051d0d6a4e1ac86eec9bbb3be38cec4f19ee767465a982bc3fbaee" exitCode=0 Jan 28 15:53:08 crc kubenswrapper[4981]: I0128 15:53:08.193652 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-whtzf" Jan 28 15:53:08 crc kubenswrapper[4981]: I0128 15:53:08.193679 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whtzf" event={"ID":"031a05bd-ab79-4385-82dc-722ac8aa0f40","Type":"ContainerDied","Data":"7c4edad1ca051d0d6a4e1ac86eec9bbb3be38cec4f19ee767465a982bc3fbaee"} Jan 28 15:53:08 crc kubenswrapper[4981]: I0128 15:53:08.194024 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-whtzf" event={"ID":"031a05bd-ab79-4385-82dc-722ac8aa0f40","Type":"ContainerDied","Data":"e09f4dc25b7dc7bae3fd6ee94350a3a9b00ee3d89e75c98e2d7f3a97d6a86e85"} Jan 28 15:53:08 crc kubenswrapper[4981]: I0128 15:53:08.194048 4981 scope.go:117] "RemoveContainer" containerID="7c4edad1ca051d0d6a4e1ac86eec9bbb3be38cec4f19ee767465a982bc3fbaee" Jan 28 15:53:08 crc kubenswrapper[4981]: I0128 15:53:08.221601 4981 scope.go:117] "RemoveContainer" containerID="bca8aabf62855a7f6b3380fdc5460b67976cfd531e0a1f89673afc8ba198f854" Jan 28 15:53:08 crc kubenswrapper[4981]: I0128 15:53:08.232607 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-whtzf"] Jan 28 15:53:08 crc kubenswrapper[4981]: I0128 15:53:08.246222 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-whtzf"] Jan 28 15:53:08 crc kubenswrapper[4981]: I0128 15:53:08.261571 4981 scope.go:117] "RemoveContainer" containerID="f330f2c80a6d6a6fae71146f0e2dcd292d776f4339f20056a2b1342253d66feb" Jan 28 15:53:08 crc kubenswrapper[4981]: I0128 15:53:08.293882 4981 scope.go:117] "RemoveContainer" containerID="7c4edad1ca051d0d6a4e1ac86eec9bbb3be38cec4f19ee767465a982bc3fbaee" Jan 28 15:53:08 crc kubenswrapper[4981]: E0128 15:53:08.296976 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c4edad1ca051d0d6a4e1ac86eec9bbb3be38cec4f19ee767465a982bc3fbaee\": container with ID starting with 7c4edad1ca051d0d6a4e1ac86eec9bbb3be38cec4f19ee767465a982bc3fbaee not found: ID does not exist" containerID="7c4edad1ca051d0d6a4e1ac86eec9bbb3be38cec4f19ee767465a982bc3fbaee" Jan 28 15:53:08 crc kubenswrapper[4981]: I0128 15:53:08.297024 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c4edad1ca051d0d6a4e1ac86eec9bbb3be38cec4f19ee767465a982bc3fbaee"} err="failed to get container status \"7c4edad1ca051d0d6a4e1ac86eec9bbb3be38cec4f19ee767465a982bc3fbaee\": rpc error: code = NotFound desc = could not find container \"7c4edad1ca051d0d6a4e1ac86eec9bbb3be38cec4f19ee767465a982bc3fbaee\": container with ID starting with 7c4edad1ca051d0d6a4e1ac86eec9bbb3be38cec4f19ee767465a982bc3fbaee not found: ID does not exist" Jan 28 15:53:08 crc kubenswrapper[4981]: I0128 15:53:08.297055 4981 scope.go:117] "RemoveContainer" containerID="bca8aabf62855a7f6b3380fdc5460b67976cfd531e0a1f89673afc8ba198f854" Jan 28 15:53:08 crc kubenswrapper[4981]: E0128 15:53:08.297510 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bca8aabf62855a7f6b3380fdc5460b67976cfd531e0a1f89673afc8ba198f854\": container with ID starting with bca8aabf62855a7f6b3380fdc5460b67976cfd531e0a1f89673afc8ba198f854 not found: ID does not exist" containerID="bca8aabf62855a7f6b3380fdc5460b67976cfd531e0a1f89673afc8ba198f854" Jan 28 15:53:08 crc kubenswrapper[4981]: I0128 15:53:08.297612 4981 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca8aabf62855a7f6b3380fdc5460b67976cfd531e0a1f89673afc8ba198f854"} err="failed to get container status \"bca8aabf62855a7f6b3380fdc5460b67976cfd531e0a1f89673afc8ba198f854\": rpc error: code = NotFound desc = could not find container \"bca8aabf62855a7f6b3380fdc5460b67976cfd531e0a1f89673afc8ba198f854\": container with ID starting with bca8aabf62855a7f6b3380fdc5460b67976cfd531e0a1f89673afc8ba198f854 not found: ID does not exist" Jan 28 15:53:08 crc kubenswrapper[4981]: I0128 15:53:08.297647 4981 scope.go:117] "RemoveContainer" containerID="f330f2c80a6d6a6fae71146f0e2dcd292d776f4339f20056a2b1342253d66feb" Jan 28 15:53:08 crc kubenswrapper[4981]: E0128 15:53:08.298088 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f330f2c80a6d6a6fae71146f0e2dcd292d776f4339f20056a2b1342253d66feb\": container with ID starting with f330f2c80a6d6a6fae71146f0e2dcd292d776f4339f20056a2b1342253d66feb not found: ID does not exist" containerID="f330f2c80a6d6a6fae71146f0e2dcd292d776f4339f20056a2b1342253d66feb" Jan 28 15:53:08 crc kubenswrapper[4981]: I0128 15:53:08.298114 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f330f2c80a6d6a6fae71146f0e2dcd292d776f4339f20056a2b1342253d66feb"} err="failed to get container status \"f330f2c80a6d6a6fae71146f0e2dcd292d776f4339f20056a2b1342253d66feb\": rpc error: code = NotFound desc = could not find container \"f330f2c80a6d6a6fae71146f0e2dcd292d776f4339f20056a2b1342253d66feb\": container with ID starting with f330f2c80a6d6a6fae71146f0e2dcd292d776f4339f20056a2b1342253d66feb not found: ID does not exist" Jan 28 15:53:09 crc kubenswrapper[4981]: I0128 15:53:09.329867 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="031a05bd-ab79-4385-82dc-722ac8aa0f40" path="/var/lib/kubelet/pods/031a05bd-ab79-4385-82dc-722ac8aa0f40/volumes" Jan 28 15:54:19 crc kubenswrapper[4981]: I0128 15:54:19.898019 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:54:19 crc kubenswrapper[4981]: I0128 15:54:19.898664 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:54:30 crc kubenswrapper[4981]: I0128 15:54:30.124719 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qnxlq"] Jan 28 15:54:30 crc kubenswrapper[4981]: E0128 15:54:30.125762 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="031a05bd-ab79-4385-82dc-722ac8aa0f40" containerName="registry-server" Jan 28 15:54:30 crc kubenswrapper[4981]: I0128 15:54:30.125781 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="031a05bd-ab79-4385-82dc-722ac8aa0f40" containerName="registry-server" Jan 28 15:54:30 crc kubenswrapper[4981]: E0128 15:54:30.125804 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="031a05bd-ab79-4385-82dc-722ac8aa0f40" containerName="extract-content" Jan 28 15:54:30 crc 
kubenswrapper[4981]: I0128 15:54:30.125812 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="031a05bd-ab79-4385-82dc-722ac8aa0f40" containerName="extract-content" Jan 28 15:54:30 crc kubenswrapper[4981]: E0128 15:54:30.125852 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="031a05bd-ab79-4385-82dc-722ac8aa0f40" containerName="extract-utilities" Jan 28 15:54:30 crc kubenswrapper[4981]: I0128 15:54:30.125862 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="031a05bd-ab79-4385-82dc-722ac8aa0f40" containerName="extract-utilities" Jan 28 15:54:30 crc kubenswrapper[4981]: I0128 15:54:30.126089 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="031a05bd-ab79-4385-82dc-722ac8aa0f40" containerName="registry-server" Jan 28 15:54:30 crc kubenswrapper[4981]: I0128 15:54:30.127718 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qnxlq" Jan 28 15:54:30 crc kubenswrapper[4981]: I0128 15:54:30.154671 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qnxlq"] Jan 28 15:54:30 crc kubenswrapper[4981]: I0128 15:54:30.221804 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqgfp\" (UniqueName: \"kubernetes.io/projected/cb1176f0-f72d-4902-9989-8f298277d707-kube-api-access-vqgfp\") pod \"certified-operators-qnxlq\" (UID: \"cb1176f0-f72d-4902-9989-8f298277d707\") " pod="openshift-marketplace/certified-operators-qnxlq" Jan 28 15:54:30 crc kubenswrapper[4981]: I0128 15:54:30.221910 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb1176f0-f72d-4902-9989-8f298277d707-catalog-content\") pod \"certified-operators-qnxlq\" (UID: \"cb1176f0-f72d-4902-9989-8f298277d707\") " pod="openshift-marketplace/certified-operators-qnxlq" Jan 28 15:54:30 crc kubenswrapper[4981]: I0128 15:54:30.221969 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb1176f0-f72d-4902-9989-8f298277d707-utilities\") pod \"certified-operators-qnxlq\" (UID: \"cb1176f0-f72d-4902-9989-8f298277d707\") " pod="openshift-marketplace/certified-operators-qnxlq" Jan 28 15:54:30 crc kubenswrapper[4981]: I0128 15:54:30.324349 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqgfp\" (UniqueName: \"kubernetes.io/projected/cb1176f0-f72d-4902-9989-8f298277d707-kube-api-access-vqgfp\") pod \"certified-operators-qnxlq\" (UID: \"cb1176f0-f72d-4902-9989-8f298277d707\") " pod="openshift-marketplace/certified-operators-qnxlq" Jan 28 15:54:30 crc kubenswrapper[4981]: I0128 15:54:30.324483 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb1176f0-f72d-4902-9989-8f298277d707-catalog-content\") pod \"certified-operators-qnxlq\" (UID: \"cb1176f0-f72d-4902-9989-8f298277d707\") " pod="openshift-marketplace/certified-operators-qnxlq" Jan 28 15:54:30 crc kubenswrapper[4981]: I0128 15:54:30.324538 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb1176f0-f72d-4902-9989-8f298277d707-utilities\") pod \"certified-operators-qnxlq\" (UID: \"cb1176f0-f72d-4902-9989-8f298277d707\") " 
pod="openshift-marketplace/certified-operators-qnxlq" Jan 28 15:54:30 crc kubenswrapper[4981]: I0128 15:54:30.325204 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb1176f0-f72d-4902-9989-8f298277d707-catalog-content\") pod \"certified-operators-qnxlq\" (UID: \"cb1176f0-f72d-4902-9989-8f298277d707\") " pod="openshift-marketplace/certified-operators-qnxlq" Jan 28 15:54:30 crc kubenswrapper[4981]: I0128 15:54:30.325303 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb1176f0-f72d-4902-9989-8f298277d707-utilities\") pod \"certified-operators-qnxlq\" (UID: \"cb1176f0-f72d-4902-9989-8f298277d707\") " pod="openshift-marketplace/certified-operators-qnxlq" Jan 28 15:54:30 crc kubenswrapper[4981]: I0128 15:54:30.349995 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqgfp\" (UniqueName: \"kubernetes.io/projected/cb1176f0-f72d-4902-9989-8f298277d707-kube-api-access-vqgfp\") pod \"certified-operators-qnxlq\" (UID: \"cb1176f0-f72d-4902-9989-8f298277d707\") " pod="openshift-marketplace/certified-operators-qnxlq" Jan 28 15:54:30 crc kubenswrapper[4981]: I0128 15:54:30.449733 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qnxlq" Jan 28 15:54:31 crc kubenswrapper[4981]: I0128 15:54:31.009013 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qnxlq"] Jan 28 15:54:32 crc kubenswrapper[4981]: I0128 15:54:32.022651 4981 generic.go:334] "Generic (PLEG): container finished" podID="cb1176f0-f72d-4902-9989-8f298277d707" containerID="78604add4262585eb3de3036b5a85c62ddf019f95b429a9ed115f4c41c995b13" exitCode=0 Jan 28 15:54:32 crc kubenswrapper[4981]: I0128 15:54:32.022745 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnxlq" event={"ID":"cb1176f0-f72d-4902-9989-8f298277d707","Type":"ContainerDied","Data":"78604add4262585eb3de3036b5a85c62ddf019f95b429a9ed115f4c41c995b13"} Jan 28 15:54:32 crc kubenswrapper[4981]: I0128 15:54:32.022963 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnxlq" event={"ID":"cb1176f0-f72d-4902-9989-8f298277d707","Type":"ContainerStarted","Data":"f40fcc9f3eaf40b49f28d68d2ad1c5b3f35f82edfc62564cc6cdda1abc85a424"} Jan 28 15:54:32 crc kubenswrapper[4981]: I0128 15:54:32.029458 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 15:54:33 crc kubenswrapper[4981]: I0128 15:54:33.038822 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnxlq" event={"ID":"cb1176f0-f72d-4902-9989-8f298277d707","Type":"ContainerStarted","Data":"d855deb89f18f480727ecda11fb973652e4b77cae89a81b9662efef0f15b1119"} Jan 28 15:54:34 crc kubenswrapper[4981]: I0128 15:54:34.051473 4981 generic.go:334] "Generic (PLEG): container finished" podID="cb1176f0-f72d-4902-9989-8f298277d707" containerID="d855deb89f18f480727ecda11fb973652e4b77cae89a81b9662efef0f15b1119" exitCode=0 Jan 28 15:54:34 crc kubenswrapper[4981]: I0128 15:54:34.051562 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnxlq" event={"ID":"cb1176f0-f72d-4902-9989-8f298277d707","Type":"ContainerDied","Data":"d855deb89f18f480727ecda11fb973652e4b77cae89a81b9662efef0f15b1119"} 
Jan 28 15:54:35 crc kubenswrapper[4981]: I0128 15:54:35.065665 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnxlq" event={"ID":"cb1176f0-f72d-4902-9989-8f298277d707","Type":"ContainerStarted","Data":"672809ce19f99fd5c8913a2c9cee33aaa2cbd9080f5ea7d5fa90efadb0d41683"} Jan 28 15:54:35 crc kubenswrapper[4981]: I0128 15:54:35.089346 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qnxlq" podStartSLOduration=2.544122051 podStartE2EDuration="5.08932455s" podCreationTimestamp="2026-01-28 15:54:30 +0000 UTC" firstStartedPulling="2026-01-28 15:54:32.029038476 +0000 UTC m=+3083.481196717" lastFinishedPulling="2026-01-28 15:54:34.574240935 +0000 UTC m=+3086.026399216" observedRunningTime="2026-01-28 15:54:35.082262764 +0000 UTC m=+3086.534421045" watchObservedRunningTime="2026-01-28 15:54:35.08932455 +0000 UTC m=+3086.541482791" Jan 28 15:54:40 crc kubenswrapper[4981]: I0128 15:54:40.450093 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qnxlq" Jan 28 15:54:40 crc kubenswrapper[4981]: I0128 15:54:40.451018 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qnxlq" Jan 28 15:54:40 crc kubenswrapper[4981]: I0128 15:54:40.529213 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qnxlq" Jan 28 15:54:41 crc kubenswrapper[4981]: I0128 15:54:41.168339 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qnxlq" Jan 28 15:54:41 crc kubenswrapper[4981]: I0128 15:54:41.226121 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qnxlq"] Jan 28 15:54:43 crc kubenswrapper[4981]: I0128 15:54:43.139408 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qnxlq" podUID="cb1176f0-f72d-4902-9989-8f298277d707" containerName="registry-server" containerID="cri-o://672809ce19f99fd5c8913a2c9cee33aaa2cbd9080f5ea7d5fa90efadb0d41683" gracePeriod=2 Jan 28 15:54:43 crc kubenswrapper[4981]: I0128 15:54:43.629650 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qnxlq" Jan 28 15:54:43 crc kubenswrapper[4981]: I0128 15:54:43.793153 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqgfp\" (UniqueName: \"kubernetes.io/projected/cb1176f0-f72d-4902-9989-8f298277d707-kube-api-access-vqgfp\") pod \"cb1176f0-f72d-4902-9989-8f298277d707\" (UID: \"cb1176f0-f72d-4902-9989-8f298277d707\") " Jan 28 15:54:43 crc kubenswrapper[4981]: I0128 15:54:43.793626 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb1176f0-f72d-4902-9989-8f298277d707-utilities\") pod \"cb1176f0-f72d-4902-9989-8f298277d707\" (UID: \"cb1176f0-f72d-4902-9989-8f298277d707\") " Jan 28 15:54:43 crc kubenswrapper[4981]: I0128 15:54:43.793725 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb1176f0-f72d-4902-9989-8f298277d707-catalog-content\") pod \"cb1176f0-f72d-4902-9989-8f298277d707\" (UID: \"cb1176f0-f72d-4902-9989-8f298277d707\") " Jan 28 15:54:43 crc kubenswrapper[4981]: I0128 15:54:43.794542 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb1176f0-f72d-4902-9989-8f298277d707-utilities" (OuterVolumeSpecName: "utilities") pod "cb1176f0-f72d-4902-9989-8f298277d707" (UID: "cb1176f0-f72d-4902-9989-8f298277d707"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:54:43 crc kubenswrapper[4981]: I0128 15:54:43.800665 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb1176f0-f72d-4902-9989-8f298277d707-kube-api-access-vqgfp" (OuterVolumeSpecName: "kube-api-access-vqgfp") pod "cb1176f0-f72d-4902-9989-8f298277d707" (UID: "cb1176f0-f72d-4902-9989-8f298277d707"). InnerVolumeSpecName "kube-api-access-vqgfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:54:43 crc kubenswrapper[4981]: I0128 15:54:43.842885 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb1176f0-f72d-4902-9989-8f298277d707-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb1176f0-f72d-4902-9989-8f298277d707" (UID: "cb1176f0-f72d-4902-9989-8f298277d707"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:54:43 crc kubenswrapper[4981]: I0128 15:54:43.896790 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb1176f0-f72d-4902-9989-8f298277d707-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:54:43 crc kubenswrapper[4981]: I0128 15:54:43.896828 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb1176f0-f72d-4902-9989-8f298277d707-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:54:43 crc kubenswrapper[4981]: I0128 15:54:43.896849 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqgfp\" (UniqueName: \"kubernetes.io/projected/cb1176f0-f72d-4902-9989-8f298277d707-kube-api-access-vqgfp\") on node \"crc\" DevicePath \"\"" Jan 28 15:54:44 crc kubenswrapper[4981]: I0128 15:54:44.155017 4981 generic.go:334] "Generic (PLEG): container finished" podID="cb1176f0-f72d-4902-9989-8f298277d707" containerID="672809ce19f99fd5c8913a2c9cee33aaa2cbd9080f5ea7d5fa90efadb0d41683" exitCode=0 Jan 28 15:54:44 crc kubenswrapper[4981]: I0128 15:54:44.155068 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnxlq" event={"ID":"cb1176f0-f72d-4902-9989-8f298277d707","Type":"ContainerDied","Data":"672809ce19f99fd5c8913a2c9cee33aaa2cbd9080f5ea7d5fa90efadb0d41683"} Jan 28 15:54:44 crc kubenswrapper[4981]: I0128 15:54:44.155104 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qnxlq" event={"ID":"cb1176f0-f72d-4902-9989-8f298277d707","Type":"ContainerDied","Data":"f40fcc9f3eaf40b49f28d68d2ad1c5b3f35f82edfc62564cc6cdda1abc85a424"} Jan 28 15:54:44 crc kubenswrapper[4981]: I0128 15:54:44.155129 4981 scope.go:117] "RemoveContainer" containerID="672809ce19f99fd5c8913a2c9cee33aaa2cbd9080f5ea7d5fa90efadb0d41683" Jan 28 15:54:44 crc kubenswrapper[4981]: I0128 15:54:44.155127 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qnxlq" Jan 28 15:54:44 crc kubenswrapper[4981]: I0128 15:54:44.183689 4981 scope.go:117] "RemoveContainer" containerID="d855deb89f18f480727ecda11fb973652e4b77cae89a81b9662efef0f15b1119" Jan 28 15:54:44 crc kubenswrapper[4981]: I0128 15:54:44.208495 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qnxlq"] Jan 28 15:54:44 crc kubenswrapper[4981]: I0128 15:54:44.215350 4981 scope.go:117] "RemoveContainer" containerID="78604add4262585eb3de3036b5a85c62ddf019f95b429a9ed115f4c41c995b13" Jan 28 15:54:44 crc kubenswrapper[4981]: I0128 15:54:44.221584 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qnxlq"] Jan 28 15:54:44 crc kubenswrapper[4981]: I0128 15:54:44.259084 4981 scope.go:117] "RemoveContainer" containerID="672809ce19f99fd5c8913a2c9cee33aaa2cbd9080f5ea7d5fa90efadb0d41683" Jan 28 15:54:44 crc kubenswrapper[4981]: E0128 15:54:44.259682 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"672809ce19f99fd5c8913a2c9cee33aaa2cbd9080f5ea7d5fa90efadb0d41683\": container with ID starting with 672809ce19f99fd5c8913a2c9cee33aaa2cbd9080f5ea7d5fa90efadb0d41683 not found: ID does not exist" containerID="672809ce19f99fd5c8913a2c9cee33aaa2cbd9080f5ea7d5fa90efadb0d41683" Jan 28 15:54:44 crc kubenswrapper[4981]: I0128 15:54:44.259735 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"672809ce19f99fd5c8913a2c9cee33aaa2cbd9080f5ea7d5fa90efadb0d41683"} err="failed to get container status \"672809ce19f99fd5c8913a2c9cee33aaa2cbd9080f5ea7d5fa90efadb0d41683\": rpc error: code = NotFound desc = could not find container \"672809ce19f99fd5c8913a2c9cee33aaa2cbd9080f5ea7d5fa90efadb0d41683\": container with ID starting with 672809ce19f99fd5c8913a2c9cee33aaa2cbd9080f5ea7d5fa90efadb0d41683 not found: ID does not exist" Jan 28 15:54:44 crc kubenswrapper[4981]: I0128 15:54:44.259763 4981 scope.go:117] "RemoveContainer" containerID="d855deb89f18f480727ecda11fb973652e4b77cae89a81b9662efef0f15b1119" Jan 28 15:54:44 crc kubenswrapper[4981]: E0128 15:54:44.260175 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d855deb89f18f480727ecda11fb973652e4b77cae89a81b9662efef0f15b1119\": container with ID starting with d855deb89f18f480727ecda11fb973652e4b77cae89a81b9662efef0f15b1119 not found: ID does not exist" containerID="d855deb89f18f480727ecda11fb973652e4b77cae89a81b9662efef0f15b1119" Jan 28 15:54:44 crc kubenswrapper[4981]: I0128 15:54:44.260257 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d855deb89f18f480727ecda11fb973652e4b77cae89a81b9662efef0f15b1119"} err="failed to get container status \"d855deb89f18f480727ecda11fb973652e4b77cae89a81b9662efef0f15b1119\": rpc error: code = NotFound desc = could not find container \"d855deb89f18f480727ecda11fb973652e4b77cae89a81b9662efef0f15b1119\": container with ID starting with d855deb89f18f480727ecda11fb973652e4b77cae89a81b9662efef0f15b1119 not found: ID does not exist" Jan 28 15:54:44 crc kubenswrapper[4981]: I0128 15:54:44.260271 4981 scope.go:117] "RemoveContainer" containerID="78604add4262585eb3de3036b5a85c62ddf019f95b429a9ed115f4c41c995b13" Jan 28 15:54:44 crc kubenswrapper[4981]: E0128 15:54:44.260575 4981 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"78604add4262585eb3de3036b5a85c62ddf019f95b429a9ed115f4c41c995b13\": container with ID starting with 78604add4262585eb3de3036b5a85c62ddf019f95b429a9ed115f4c41c995b13 not found: ID does not exist" containerID="78604add4262585eb3de3036b5a85c62ddf019f95b429a9ed115f4c41c995b13" Jan 28 15:54:44 crc kubenswrapper[4981]: I0128 15:54:44.260599 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78604add4262585eb3de3036b5a85c62ddf019f95b429a9ed115f4c41c995b13"} err="failed to get container status \"78604add4262585eb3de3036b5a85c62ddf019f95b429a9ed115f4c41c995b13\": rpc error: code = NotFound desc = could not find container \"78604add4262585eb3de3036b5a85c62ddf019f95b429a9ed115f4c41c995b13\": container with ID starting with 78604add4262585eb3de3036b5a85c62ddf019f95b429a9ed115f4c41c995b13 not found: ID does not exist" Jan 28 15:54:45 crc kubenswrapper[4981]: I0128 15:54:45.332167 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb1176f0-f72d-4902-9989-8f298277d707" path="/var/lib/kubelet/pods/cb1176f0-f72d-4902-9989-8f298277d707/volumes" Jan 28 15:54:49 crc kubenswrapper[4981]: I0128 15:54:49.898276 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:54:49 crc kubenswrapper[4981]: I0128 15:54:49.899102 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:55:19 crc kubenswrapper[4981]: I0128 15:55:19.897627 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 15:55:19 crc kubenswrapper[4981]: I0128 15:55:19.898142 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 15:55:19 crc kubenswrapper[4981]: I0128 15:55:19.898214 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 15:55:19 crc kubenswrapper[4981]: I0128 15:55:19.898792 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8"} pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 15:55:19 crc kubenswrapper[4981]: I0128 15:55:19.898862 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" 
podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" containerID="cri-o://507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" gracePeriod=600 Jan 28 15:55:20 crc kubenswrapper[4981]: E0128 15:55:20.023656 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:55:20 crc kubenswrapper[4981]: I0128 15:55:20.523488 4981 generic.go:334] "Generic (PLEG): container finished" podID="67525d77-715e-4ec3-bdbb-6854657355c0" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" exitCode=0 Jan 28 15:55:20 crc kubenswrapper[4981]: I0128 15:55:20.523534 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerDied","Data":"507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8"} Jan 28 15:55:20 crc kubenswrapper[4981]: I0128 15:55:20.523567 4981 scope.go:117] "RemoveContainer" containerID="b400ebc18b32527d4a290e49d092858c1b4201e6d520ad3ffd416135ae56e45e" Jan 28 15:55:20 crc kubenswrapper[4981]: I0128 15:55:20.524285 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:55:20 crc kubenswrapper[4981]: E0128 15:55:20.524528 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:55:35 crc kubenswrapper[4981]: I0128 15:55:35.318976 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:55:35 crc kubenswrapper[4981]: E0128 15:55:35.319822 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:55:46 crc kubenswrapper[4981]: I0128 15:55:46.319093 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:55:46 crc kubenswrapper[4981]: E0128 15:55:46.319900 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:56:00 crc kubenswrapper[4981]: I0128 15:56:00.318661 4981 scope.go:117] 
"RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:56:00 crc kubenswrapper[4981]: E0128 15:56:00.319630 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:56:11 crc kubenswrapper[4981]: I0128 15:56:11.319073 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:56:11 crc kubenswrapper[4981]: E0128 15:56:11.319816 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:56:26 crc kubenswrapper[4981]: I0128 15:56:26.318351 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:56:26 crc kubenswrapper[4981]: E0128 15:56:26.319991 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:56:41 crc kubenswrapper[4981]: I0128 15:56:41.318752 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:56:41 crc kubenswrapper[4981]: E0128 15:56:41.319466 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:56:52 crc kubenswrapper[4981]: I0128 15:56:52.318603 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:56:52 crc kubenswrapper[4981]: E0128 15:56:52.319396 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:57:06 crc kubenswrapper[4981]: I0128 15:57:06.319137 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:57:06 crc kubenswrapper[4981]: E0128 15:57:06.319953 4981 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:57:17 crc kubenswrapper[4981]: I0128 15:57:17.029574 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wl8c9"] Jan 28 15:57:17 crc kubenswrapper[4981]: E0128 15:57:17.030518 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb1176f0-f72d-4902-9989-8f298277d707" containerName="extract-content" Jan 28 15:57:17 crc kubenswrapper[4981]: I0128 15:57:17.030537 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb1176f0-f72d-4902-9989-8f298277d707" containerName="extract-content" Jan 28 15:57:17 crc kubenswrapper[4981]: E0128 15:57:17.030574 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb1176f0-f72d-4902-9989-8f298277d707" containerName="extract-utilities" Jan 28 15:57:17 crc kubenswrapper[4981]: I0128 15:57:17.030586 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb1176f0-f72d-4902-9989-8f298277d707" containerName="extract-utilities" Jan 28 15:57:17 crc kubenswrapper[4981]: E0128 15:57:17.030605 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb1176f0-f72d-4902-9989-8f298277d707" containerName="registry-server" Jan 28 15:57:17 crc kubenswrapper[4981]: I0128 15:57:17.030613 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb1176f0-f72d-4902-9989-8f298277d707" containerName="registry-server" Jan 28 15:57:17 crc kubenswrapper[4981]: I0128 15:57:17.030945 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb1176f0-f72d-4902-9989-8f298277d707" containerName="registry-server" Jan 28 15:57:17 crc kubenswrapper[4981]: I0128 15:57:17.032644 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wl8c9" Jan 28 15:57:17 crc kubenswrapper[4981]: I0128 15:57:17.042292 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wl8c9"] Jan 28 15:57:17 crc kubenswrapper[4981]: I0128 15:57:17.202304 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6175e49-f0fc-41d3-b750-e9e1b6cbca02-catalog-content\") pod \"redhat-operators-wl8c9\" (UID: \"f6175e49-f0fc-41d3-b750-e9e1b6cbca02\") " pod="openshift-marketplace/redhat-operators-wl8c9" Jan 28 15:57:17 crc kubenswrapper[4981]: I0128 15:57:17.202618 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msbhg\" (UniqueName: \"kubernetes.io/projected/f6175e49-f0fc-41d3-b750-e9e1b6cbca02-kube-api-access-msbhg\") pod \"redhat-operators-wl8c9\" (UID: \"f6175e49-f0fc-41d3-b750-e9e1b6cbca02\") " pod="openshift-marketplace/redhat-operators-wl8c9" Jan 28 15:57:17 crc kubenswrapper[4981]: I0128 15:57:17.202675 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6175e49-f0fc-41d3-b750-e9e1b6cbca02-utilities\") pod \"redhat-operators-wl8c9\" (UID: \"f6175e49-f0fc-41d3-b750-e9e1b6cbca02\") " pod="openshift-marketplace/redhat-operators-wl8c9" Jan 28 15:57:17 crc kubenswrapper[4981]: I0128 15:57:17.304569 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6175e49-f0fc-41d3-b750-e9e1b6cbca02-utilities\") pod \"redhat-operators-wl8c9\" (UID: \"f6175e49-f0fc-41d3-b750-e9e1b6cbca02\") " pod="openshift-marketplace/redhat-operators-wl8c9" Jan 28 15:57:17 crc kubenswrapper[4981]: I0128 15:57:17.304778 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6175e49-f0fc-41d3-b750-e9e1b6cbca02-catalog-content\") pod \"redhat-operators-wl8c9\" (UID: \"f6175e49-f0fc-41d3-b750-e9e1b6cbca02\") " pod="openshift-marketplace/redhat-operators-wl8c9" Jan 28 15:57:17 crc kubenswrapper[4981]: I0128 15:57:17.304805 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msbhg\" (UniqueName: \"kubernetes.io/projected/f6175e49-f0fc-41d3-b750-e9e1b6cbca02-kube-api-access-msbhg\") pod \"redhat-operators-wl8c9\" (UID: \"f6175e49-f0fc-41d3-b750-e9e1b6cbca02\") " pod="openshift-marketplace/redhat-operators-wl8c9" Jan 28 15:57:17 crc kubenswrapper[4981]: I0128 15:57:17.305057 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6175e49-f0fc-41d3-b750-e9e1b6cbca02-utilities\") pod \"redhat-operators-wl8c9\" (UID: \"f6175e49-f0fc-41d3-b750-e9e1b6cbca02\") " pod="openshift-marketplace/redhat-operators-wl8c9" Jan 28 15:57:17 crc kubenswrapper[4981]: I0128 15:57:17.305256 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6175e49-f0fc-41d3-b750-e9e1b6cbca02-catalog-content\") pod \"redhat-operators-wl8c9\" (UID: \"f6175e49-f0fc-41d3-b750-e9e1b6cbca02\") " pod="openshift-marketplace/redhat-operators-wl8c9" Jan 28 15:57:17 crc kubenswrapper[4981]: I0128 15:57:17.338437 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-msbhg\" (UniqueName: \"kubernetes.io/projected/f6175e49-f0fc-41d3-b750-e9e1b6cbca02-kube-api-access-msbhg\") pod \"redhat-operators-wl8c9\" (UID: \"f6175e49-f0fc-41d3-b750-e9e1b6cbca02\") " pod="openshift-marketplace/redhat-operators-wl8c9" Jan 28 15:57:17 crc kubenswrapper[4981]: I0128 15:57:17.392712 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wl8c9" Jan 28 15:57:17 crc kubenswrapper[4981]: I0128 15:57:17.905674 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wl8c9"] Jan 28 15:57:18 crc kubenswrapper[4981]: I0128 15:57:18.654829 4981 generic.go:334] "Generic (PLEG): container finished" podID="f6175e49-f0fc-41d3-b750-e9e1b6cbca02" containerID="73f1932615dd87f9d8d38edb7095f0cda29d8f78b2eb7dd34e19d801d715c9a1" exitCode=0 Jan 28 15:57:18 crc kubenswrapper[4981]: I0128 15:57:18.654878 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wl8c9" event={"ID":"f6175e49-f0fc-41d3-b750-e9e1b6cbca02","Type":"ContainerDied","Data":"73f1932615dd87f9d8d38edb7095f0cda29d8f78b2eb7dd34e19d801d715c9a1"} Jan 28 15:57:18 crc kubenswrapper[4981]: I0128 15:57:18.655119 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wl8c9" event={"ID":"f6175e49-f0fc-41d3-b750-e9e1b6cbca02","Type":"ContainerStarted","Data":"3dcfb85b60c4f06baff32731b4e63608c050140fdaefa15a406bbcc01f48dd15"} Jan 28 15:57:19 crc kubenswrapper[4981]: I0128 15:57:19.329523 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:57:19 crc kubenswrapper[4981]: E0128 15:57:19.329834 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:57:27 crc kubenswrapper[4981]: I0128 15:57:27.769090 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wl8c9" event={"ID":"f6175e49-f0fc-41d3-b750-e9e1b6cbca02","Type":"ContainerStarted","Data":"2e5989d0ec49c3727e1237879f78f02119f992903fc44d76d6e4d626bb7abb11"} Jan 28 15:57:30 crc kubenswrapper[4981]: I0128 15:57:30.318446 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:57:30 crc kubenswrapper[4981]: E0128 15:57:30.319890 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:57:34 crc kubenswrapper[4981]: I0128 15:57:34.841304 4981 generic.go:334] "Generic (PLEG): container finished" podID="f6175e49-f0fc-41d3-b750-e9e1b6cbca02" containerID="2e5989d0ec49c3727e1237879f78f02119f992903fc44d76d6e4d626bb7abb11" exitCode=0 Jan 28 15:57:34 crc kubenswrapper[4981]: I0128 15:57:34.841421 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-wl8c9" event={"ID":"f6175e49-f0fc-41d3-b750-e9e1b6cbca02","Type":"ContainerDied","Data":"2e5989d0ec49c3727e1237879f78f02119f992903fc44d76d6e4d626bb7abb11"} Jan 28 15:57:37 crc kubenswrapper[4981]: I0128 15:57:37.873817 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wl8c9" event={"ID":"f6175e49-f0fc-41d3-b750-e9e1b6cbca02","Type":"ContainerStarted","Data":"a6ff850ab563440fc1177a4b84695e3b5e8c7dd31279a57cbf06a29ac501d8e5"} Jan 28 15:57:37 crc kubenswrapper[4981]: I0128 15:57:37.897410 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wl8c9" podStartSLOduration=3.209239461 podStartE2EDuration="20.897388591s" podCreationTimestamp="2026-01-28 15:57:17 +0000 UTC" firstStartedPulling="2026-01-28 15:57:18.656314097 +0000 UTC m=+3250.108472328" lastFinishedPulling="2026-01-28 15:57:36.344463217 +0000 UTC m=+3267.796621458" observedRunningTime="2026-01-28 15:57:37.894374711 +0000 UTC m=+3269.346532992" watchObservedRunningTime="2026-01-28 15:57:37.897388591 +0000 UTC m=+3269.349546842" Jan 28 15:57:43 crc kubenswrapper[4981]: I0128 15:57:43.319667 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:57:43 crc kubenswrapper[4981]: E0128 15:57:43.320546 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:57:47 crc kubenswrapper[4981]: I0128 15:57:47.392966 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wl8c9" Jan 28 15:57:47 crc kubenswrapper[4981]: I0128 15:57:47.393481 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wl8c9" Jan 28 15:57:48 crc kubenswrapper[4981]: I0128 15:57:48.446555 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wl8c9" podUID="f6175e49-f0fc-41d3-b750-e9e1b6cbca02" containerName="registry-server" probeResult="failure" output=< Jan 28 15:57:48 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s Jan 28 15:57:48 crc kubenswrapper[4981]: > Jan 28 15:57:54 crc kubenswrapper[4981]: I0128 15:57:54.319422 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:57:54 crc kubenswrapper[4981]: E0128 15:57:54.320449 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:57:58 crc kubenswrapper[4981]: I0128 15:57:58.449752 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wl8c9" podUID="f6175e49-f0fc-41d3-b750-e9e1b6cbca02" containerName="registry-server" 
probeResult="failure" output=< Jan 28 15:57:58 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s Jan 28 15:57:58 crc kubenswrapper[4981]: > Jan 28 15:58:08 crc kubenswrapper[4981]: I0128 15:58:08.319466 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:58:08 crc kubenswrapper[4981]: E0128 15:58:08.320209 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:58:08 crc kubenswrapper[4981]: I0128 15:58:08.437507 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wl8c9" podUID="f6175e49-f0fc-41d3-b750-e9e1b6cbca02" containerName="registry-server" probeResult="failure" output=< Jan 28 15:58:08 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s Jan 28 15:58:08 crc kubenswrapper[4981]: > Jan 28 15:58:18 crc kubenswrapper[4981]: I0128 15:58:18.452680 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wl8c9" podUID="f6175e49-f0fc-41d3-b750-e9e1b6cbca02" containerName="registry-server" probeResult="failure" output=< Jan 28 15:58:18 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s Jan 28 15:58:18 crc kubenswrapper[4981]: > Jan 28 15:58:19 crc kubenswrapper[4981]: I0128 15:58:19.323533 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:58:19 crc kubenswrapper[4981]: E0128 15:58:19.323864 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:58:27 crc kubenswrapper[4981]: I0128 15:58:27.484269 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wl8c9" Jan 28 15:58:27 crc kubenswrapper[4981]: I0128 15:58:27.543672 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wl8c9" Jan 28 15:58:27 crc kubenswrapper[4981]: I0128 15:58:27.629610 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wl8c9"] Jan 28 15:58:27 crc kubenswrapper[4981]: I0128 15:58:27.743593 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-79kcx"] Jan 28 15:58:27 crc kubenswrapper[4981]: I0128 15:58:27.743835 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-79kcx" podUID="4c7bbc77-2c0d-4685-8698-4244f09ca0f3" containerName="registry-server" containerID="cri-o://965dafa183c949e5730e57b45732b3967580a0840168912f1310e235ab73e9d5" gracePeriod=2 Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.185452 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-79kcx" Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.304811 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptbfr\" (UniqueName: \"kubernetes.io/projected/4c7bbc77-2c0d-4685-8698-4244f09ca0f3-kube-api-access-ptbfr\") pod \"4c7bbc77-2c0d-4685-8698-4244f09ca0f3\" (UID: \"4c7bbc77-2c0d-4685-8698-4244f09ca0f3\") " Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.304890 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c7bbc77-2c0d-4685-8698-4244f09ca0f3-utilities\") pod \"4c7bbc77-2c0d-4685-8698-4244f09ca0f3\" (UID: \"4c7bbc77-2c0d-4685-8698-4244f09ca0f3\") " Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.305299 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c7bbc77-2c0d-4685-8698-4244f09ca0f3-catalog-content\") pod \"4c7bbc77-2c0d-4685-8698-4244f09ca0f3\" (UID: \"4c7bbc77-2c0d-4685-8698-4244f09ca0f3\") " Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.310249 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c7bbc77-2c0d-4685-8698-4244f09ca0f3-utilities" (OuterVolumeSpecName: "utilities") pod "4c7bbc77-2c0d-4685-8698-4244f09ca0f3" (UID: "4c7bbc77-2c0d-4685-8698-4244f09ca0f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.324564 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c7bbc77-2c0d-4685-8698-4244f09ca0f3-kube-api-access-ptbfr" (OuterVolumeSpecName: "kube-api-access-ptbfr") pod "4c7bbc77-2c0d-4685-8698-4244f09ca0f3" (UID: "4c7bbc77-2c0d-4685-8698-4244f09ca0f3"). InnerVolumeSpecName "kube-api-access-ptbfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.369162 4981 generic.go:334] "Generic (PLEG): container finished" podID="4c7bbc77-2c0d-4685-8698-4244f09ca0f3" containerID="965dafa183c949e5730e57b45732b3967580a0840168912f1310e235ab73e9d5" exitCode=0 Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.370202 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-79kcx" Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.370711 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79kcx" event={"ID":"4c7bbc77-2c0d-4685-8698-4244f09ca0f3","Type":"ContainerDied","Data":"965dafa183c949e5730e57b45732b3967580a0840168912f1310e235ab73e9d5"} Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.370744 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-79kcx" event={"ID":"4c7bbc77-2c0d-4685-8698-4244f09ca0f3","Type":"ContainerDied","Data":"b7382b4753db506011b0bebe1d5be333d32335a26a382e2ae6b8baa92eaca4fd"} Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.370763 4981 scope.go:117] "RemoveContainer" containerID="965dafa183c949e5730e57b45732b3967580a0840168912f1310e235ab73e9d5" Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.407553 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptbfr\" (UniqueName: \"kubernetes.io/projected/4c7bbc77-2c0d-4685-8698-4244f09ca0f3-kube-api-access-ptbfr\") on node \"crc\" DevicePath \"\"" Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.407857 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c7bbc77-2c0d-4685-8698-4244f09ca0f3-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.437478 4981 scope.go:117] "RemoveContainer" containerID="204c2711dd6e54d763554ee48118d43fda5b3a5d12f6031f1456c69634085ec2" Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.459267 4981 scope.go:117] "RemoveContainer" containerID="252034e9a84529bb750c481cd90ca8f2318f41a795d4480b6926c2c13baf86b3" Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.511324 4981 scope.go:117] "RemoveContainer" containerID="965dafa183c949e5730e57b45732b3967580a0840168912f1310e235ab73e9d5" Jan 28 15:58:28 crc kubenswrapper[4981]: E0128 15:58:28.511751 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"965dafa183c949e5730e57b45732b3967580a0840168912f1310e235ab73e9d5\": container with ID starting with 965dafa183c949e5730e57b45732b3967580a0840168912f1310e235ab73e9d5 not found: ID does not exist" containerID="965dafa183c949e5730e57b45732b3967580a0840168912f1310e235ab73e9d5" Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.511783 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"965dafa183c949e5730e57b45732b3967580a0840168912f1310e235ab73e9d5"} err="failed to get container status \"965dafa183c949e5730e57b45732b3967580a0840168912f1310e235ab73e9d5\": rpc error: code = NotFound desc = could not find container \"965dafa183c949e5730e57b45732b3967580a0840168912f1310e235ab73e9d5\": container with ID starting with 965dafa183c949e5730e57b45732b3967580a0840168912f1310e235ab73e9d5 not found: ID does not exist" Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.511807 4981 scope.go:117] "RemoveContainer" containerID="204c2711dd6e54d763554ee48118d43fda5b3a5d12f6031f1456c69634085ec2" Jan 28 15:58:28 crc kubenswrapper[4981]: E0128 15:58:28.512115 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"204c2711dd6e54d763554ee48118d43fda5b3a5d12f6031f1456c69634085ec2\": container with ID starting with 
204c2711dd6e54d763554ee48118d43fda5b3a5d12f6031f1456c69634085ec2 not found: ID does not exist" containerID="204c2711dd6e54d763554ee48118d43fda5b3a5d12f6031f1456c69634085ec2" Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.512146 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"204c2711dd6e54d763554ee48118d43fda5b3a5d12f6031f1456c69634085ec2"} err="failed to get container status \"204c2711dd6e54d763554ee48118d43fda5b3a5d12f6031f1456c69634085ec2\": rpc error: code = NotFound desc = could not find container \"204c2711dd6e54d763554ee48118d43fda5b3a5d12f6031f1456c69634085ec2\": container with ID starting with 204c2711dd6e54d763554ee48118d43fda5b3a5d12f6031f1456c69634085ec2 not found: ID does not exist" Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.512167 4981 scope.go:117] "RemoveContainer" containerID="252034e9a84529bb750c481cd90ca8f2318f41a795d4480b6926c2c13baf86b3" Jan 28 15:58:28 crc kubenswrapper[4981]: E0128 15:58:28.512420 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"252034e9a84529bb750c481cd90ca8f2318f41a795d4480b6926c2c13baf86b3\": container with ID starting with 252034e9a84529bb750c481cd90ca8f2318f41a795d4480b6926c2c13baf86b3 not found: ID does not exist" containerID="252034e9a84529bb750c481cd90ca8f2318f41a795d4480b6926c2c13baf86b3" Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.512444 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"252034e9a84529bb750c481cd90ca8f2318f41a795d4480b6926c2c13baf86b3"} err="failed to get container status \"252034e9a84529bb750c481cd90ca8f2318f41a795d4480b6926c2c13baf86b3\": rpc error: code = NotFound desc = could not find container \"252034e9a84529bb750c481cd90ca8f2318f41a795d4480b6926c2c13baf86b3\": container with ID starting with 252034e9a84529bb750c481cd90ca8f2318f41a795d4480b6926c2c13baf86b3 not found: ID does not exist" Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.569591 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c7bbc77-2c0d-4685-8698-4244f09ca0f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c7bbc77-2c0d-4685-8698-4244f09ca0f3" (UID: "4c7bbc77-2c0d-4685-8698-4244f09ca0f3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.612083 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c7bbc77-2c0d-4685-8698-4244f09ca0f3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.709007 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-79kcx"] Jan 28 15:58:28 crc kubenswrapper[4981]: I0128 15:58:28.717819 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-79kcx"] Jan 28 15:58:29 crc kubenswrapper[4981]: I0128 15:58:29.329116 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c7bbc77-2c0d-4685-8698-4244f09ca0f3" path="/var/lib/kubelet/pods/4c7bbc77-2c0d-4685-8698-4244f09ca0f3/volumes" Jan 28 15:58:31 crc kubenswrapper[4981]: I0128 15:58:31.319546 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:58:31 crc kubenswrapper[4981]: E0128 15:58:31.320661 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:58:46 crc kubenswrapper[4981]: I0128 15:58:46.318896 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:58:46 crc kubenswrapper[4981]: E0128 15:58:46.319826 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:58:59 crc kubenswrapper[4981]: I0128 15:58:59.333006 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:58:59 crc kubenswrapper[4981]: E0128 15:58:59.333989 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:59:14 crc kubenswrapper[4981]: I0128 15:59:14.319530 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:59:14 crc kubenswrapper[4981]: E0128 15:59:14.320498 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:59:27 crc kubenswrapper[4981]: I0128 15:59:27.319374 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:59:27 crc kubenswrapper[4981]: E0128 15:59:27.321349 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:59:41 crc kubenswrapper[4981]: I0128 15:59:41.319422 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:59:41 crc kubenswrapper[4981]: E0128 15:59:41.321920 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 15:59:55 crc kubenswrapper[4981]: I0128 15:59:55.318977 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 15:59:55 crc kubenswrapper[4981]: E0128 15:59:55.320201 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:00:00 crc kubenswrapper[4981]: I0128 16:00:00.169828 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq"] Jan 28 16:00:00 crc kubenswrapper[4981]: E0128 16:00:00.170820 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c7bbc77-2c0d-4685-8698-4244f09ca0f3" containerName="registry-server" Jan 28 16:00:00 crc kubenswrapper[4981]: I0128 16:00:00.170843 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c7bbc77-2c0d-4685-8698-4244f09ca0f3" containerName="registry-server" Jan 28 16:00:00 crc kubenswrapper[4981]: E0128 16:00:00.170876 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c7bbc77-2c0d-4685-8698-4244f09ca0f3" containerName="extract-content" Jan 28 16:00:00 crc kubenswrapper[4981]: I0128 16:00:00.170887 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c7bbc77-2c0d-4685-8698-4244f09ca0f3" containerName="extract-content" Jan 28 16:00:00 crc kubenswrapper[4981]: E0128 16:00:00.170906 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c7bbc77-2c0d-4685-8698-4244f09ca0f3" containerName="extract-utilities" Jan 28 16:00:00 crc kubenswrapper[4981]: I0128 16:00:00.170918 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c7bbc77-2c0d-4685-8698-4244f09ca0f3" containerName="extract-utilities" Jan 28 16:00:00 crc 
kubenswrapper[4981]: I0128 16:00:00.171288 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c7bbc77-2c0d-4685-8698-4244f09ca0f3" containerName="registry-server" Jan 28 16:00:00 crc kubenswrapper[4981]: I0128 16:00:00.172374 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq" Jan 28 16:00:00 crc kubenswrapper[4981]: I0128 16:00:00.174862 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 16:00:00 crc kubenswrapper[4981]: I0128 16:00:00.181112 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq"] Jan 28 16:00:00 crc kubenswrapper[4981]: I0128 16:00:00.181622 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 16:00:00 crc kubenswrapper[4981]: I0128 16:00:00.200977 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91d9fb93-b51e-42b2-a144-c6d10be69220-config-volume\") pod \"collect-profiles-29493600-gxldq\" (UID: \"91d9fb93-b51e-42b2-a144-c6d10be69220\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq" Jan 28 16:00:00 crc kubenswrapper[4981]: I0128 16:00:00.201132 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91d9fb93-b51e-42b2-a144-c6d10be69220-secret-volume\") pod \"collect-profiles-29493600-gxldq\" (UID: \"91d9fb93-b51e-42b2-a144-c6d10be69220\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq" Jan 28 16:00:00 crc kubenswrapper[4981]: I0128 16:00:00.201299 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jq7s\" (UniqueName: \"kubernetes.io/projected/91d9fb93-b51e-42b2-a144-c6d10be69220-kube-api-access-7jq7s\") pod \"collect-profiles-29493600-gxldq\" (UID: \"91d9fb93-b51e-42b2-a144-c6d10be69220\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq" Jan 28 16:00:00 crc kubenswrapper[4981]: I0128 16:00:00.302957 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jq7s\" (UniqueName: \"kubernetes.io/projected/91d9fb93-b51e-42b2-a144-c6d10be69220-kube-api-access-7jq7s\") pod \"collect-profiles-29493600-gxldq\" (UID: \"91d9fb93-b51e-42b2-a144-c6d10be69220\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq" Jan 28 16:00:00 crc kubenswrapper[4981]: I0128 16:00:00.303073 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91d9fb93-b51e-42b2-a144-c6d10be69220-config-volume\") pod \"collect-profiles-29493600-gxldq\" (UID: \"91d9fb93-b51e-42b2-a144-c6d10be69220\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq" Jan 28 16:00:00 crc kubenswrapper[4981]: I0128 16:00:00.303218 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91d9fb93-b51e-42b2-a144-c6d10be69220-secret-volume\") pod \"collect-profiles-29493600-gxldq\" (UID: \"91d9fb93-b51e-42b2-a144-c6d10be69220\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq" Jan 28 16:00:00 crc kubenswrapper[4981]: I0128 16:00:00.304219 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91d9fb93-b51e-42b2-a144-c6d10be69220-config-volume\") pod \"collect-profiles-29493600-gxldq\" (UID: \"91d9fb93-b51e-42b2-a144-c6d10be69220\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq" Jan 28 16:00:00 crc kubenswrapper[4981]: I0128 16:00:00.310836 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91d9fb93-b51e-42b2-a144-c6d10be69220-secret-volume\") pod \"collect-profiles-29493600-gxldq\" (UID: \"91d9fb93-b51e-42b2-a144-c6d10be69220\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq" Jan 28 16:00:00 crc kubenswrapper[4981]: I0128 16:00:00.321099 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jq7s\" (UniqueName: \"kubernetes.io/projected/91d9fb93-b51e-42b2-a144-c6d10be69220-kube-api-access-7jq7s\") pod \"collect-profiles-29493600-gxldq\" (UID: \"91d9fb93-b51e-42b2-a144-c6d10be69220\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq" Jan 28 16:00:00 crc kubenswrapper[4981]: I0128 16:00:00.511499 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq" Jan 28 16:00:00 crc kubenswrapper[4981]: I0128 16:00:00.973040 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq"] Jan 28 16:00:01 crc kubenswrapper[4981]: I0128 16:00:01.269284 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq" event={"ID":"91d9fb93-b51e-42b2-a144-c6d10be69220","Type":"ContainerStarted","Data":"58e1abfc540cd677526b5ec4a040396a966f477049ddc896157a521b8c8161d8"} Jan 28 16:00:01 crc kubenswrapper[4981]: I0128 16:00:01.269330 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq" event={"ID":"91d9fb93-b51e-42b2-a144-c6d10be69220","Type":"ContainerStarted","Data":"11c0c3713c0b2077bc8c118d04bf21e7ff132808e59cb6f4f30816828d6e80f3"} Jan 28 16:00:01 crc kubenswrapper[4981]: I0128 16:00:01.307898 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq" podStartSLOduration=1.307869111 podStartE2EDuration="1.307869111s" podCreationTimestamp="2026-01-28 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:00:01.288398562 +0000 UTC m=+3412.740556823" watchObservedRunningTime="2026-01-28 16:00:01.307869111 +0000 UTC m=+3412.760027372" Jan 28 16:00:02 crc kubenswrapper[4981]: I0128 16:00:02.283897 4981 generic.go:334] "Generic (PLEG): container finished" podID="91d9fb93-b51e-42b2-a144-c6d10be69220" containerID="58e1abfc540cd677526b5ec4a040396a966f477049ddc896157a521b8c8161d8" exitCode=0 Jan 28 16:00:02 crc kubenswrapper[4981]: I0128 16:00:02.283966 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq" 
event={"ID":"91d9fb93-b51e-42b2-a144-c6d10be69220","Type":"ContainerDied","Data":"58e1abfc540cd677526b5ec4a040396a966f477049ddc896157a521b8c8161d8"} Jan 28 16:00:03 crc kubenswrapper[4981]: I0128 16:00:03.668869 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq" Jan 28 16:00:03 crc kubenswrapper[4981]: I0128 16:00:03.674533 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91d9fb93-b51e-42b2-a144-c6d10be69220-secret-volume\") pod \"91d9fb93-b51e-42b2-a144-c6d10be69220\" (UID: \"91d9fb93-b51e-42b2-a144-c6d10be69220\") " Jan 28 16:00:03 crc kubenswrapper[4981]: I0128 16:00:03.674662 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jq7s\" (UniqueName: \"kubernetes.io/projected/91d9fb93-b51e-42b2-a144-c6d10be69220-kube-api-access-7jq7s\") pod \"91d9fb93-b51e-42b2-a144-c6d10be69220\" (UID: \"91d9fb93-b51e-42b2-a144-c6d10be69220\") " Jan 28 16:00:03 crc kubenswrapper[4981]: I0128 16:00:03.674785 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91d9fb93-b51e-42b2-a144-c6d10be69220-config-volume\") pod \"91d9fb93-b51e-42b2-a144-c6d10be69220\" (UID: \"91d9fb93-b51e-42b2-a144-c6d10be69220\") " Jan 28 16:00:03 crc kubenswrapper[4981]: I0128 16:00:03.675305 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91d9fb93-b51e-42b2-a144-c6d10be69220-config-volume" (OuterVolumeSpecName: "config-volume") pod "91d9fb93-b51e-42b2-a144-c6d10be69220" (UID: "91d9fb93-b51e-42b2-a144-c6d10be69220"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:00:03 crc kubenswrapper[4981]: I0128 16:00:03.681090 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91d9fb93-b51e-42b2-a144-c6d10be69220-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "91d9fb93-b51e-42b2-a144-c6d10be69220" (UID: "91d9fb93-b51e-42b2-a144-c6d10be69220"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:00:03 crc kubenswrapper[4981]: I0128 16:00:03.681566 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91d9fb93-b51e-42b2-a144-c6d10be69220-kube-api-access-7jq7s" (OuterVolumeSpecName: "kube-api-access-7jq7s") pod "91d9fb93-b51e-42b2-a144-c6d10be69220" (UID: "91d9fb93-b51e-42b2-a144-c6d10be69220"). InnerVolumeSpecName "kube-api-access-7jq7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:00:03 crc kubenswrapper[4981]: I0128 16:00:03.776889 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jq7s\" (UniqueName: \"kubernetes.io/projected/91d9fb93-b51e-42b2-a144-c6d10be69220-kube-api-access-7jq7s\") on node \"crc\" DevicePath \"\"" Jan 28 16:00:03 crc kubenswrapper[4981]: I0128 16:00:03.776919 4981 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91d9fb93-b51e-42b2-a144-c6d10be69220-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 16:00:03 crc kubenswrapper[4981]: I0128 16:00:03.776929 4981 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91d9fb93-b51e-42b2-a144-c6d10be69220-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 16:00:04 crc kubenswrapper[4981]: I0128 16:00:04.314635 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq" Jan 28 16:00:04 crc kubenswrapper[4981]: I0128 16:00:04.314557 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493600-gxldq" event={"ID":"91d9fb93-b51e-42b2-a144-c6d10be69220","Type":"ContainerDied","Data":"11c0c3713c0b2077bc8c118d04bf21e7ff132808e59cb6f4f30816828d6e80f3"} Jan 28 16:00:04 crc kubenswrapper[4981]: I0128 16:00:04.314844 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11c0c3713c0b2077bc8c118d04bf21e7ff132808e59cb6f4f30816828d6e80f3" Jan 28 16:00:04 crc kubenswrapper[4981]: I0128 16:00:04.373588 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99"] Jan 28 16:00:04 crc kubenswrapper[4981]: I0128 16:00:04.381004 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493555-x5v99"] Jan 28 16:00:05 crc kubenswrapper[4981]: I0128 16:00:05.344231 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2df18abc-5d9b-447b-997c-2e60e4f85bad" path="/var/lib/kubelet/pods/2df18abc-5d9b-447b-997c-2e60e4f85bad/volumes" Jan 28 16:00:08 crc kubenswrapper[4981]: I0128 16:00:08.320120 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 16:00:08 crc kubenswrapper[4981]: E0128 16:00:08.320922 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:00:22 crc kubenswrapper[4981]: I0128 16:00:22.319162 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 16:00:23 crc kubenswrapper[4981]: I0128 16:00:23.489486 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerStarted","Data":"39e0a59a118d98c8222e77fa5717ab9ada8940cd17d88d2684c70c105b80474d"} Jan 28 16:00:34 crc kubenswrapper[4981]: I0128 16:00:34.046585 4981 
scope.go:117] "RemoveContainer" containerID="a648c4db1087e59e7c8821bce4e290ab7934c2d7f20949c2e830960e8beb5783" Jan 28 16:01:00 crc kubenswrapper[4981]: I0128 16:01:00.150127 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29493601-rsfpp"] Jan 28 16:01:00 crc kubenswrapper[4981]: E0128 16:01:00.151177 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91d9fb93-b51e-42b2-a144-c6d10be69220" containerName="collect-profiles" Jan 28 16:01:00 crc kubenswrapper[4981]: I0128 16:01:00.151214 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="91d9fb93-b51e-42b2-a144-c6d10be69220" containerName="collect-profiles" Jan 28 16:01:00 crc kubenswrapper[4981]: I0128 16:01:00.151511 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="91d9fb93-b51e-42b2-a144-c6d10be69220" containerName="collect-profiles" Jan 28 16:01:00 crc kubenswrapper[4981]: I0128 16:01:00.152299 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29493601-rsfpp" Jan 28 16:01:00 crc kubenswrapper[4981]: I0128 16:01:00.181112 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29493601-rsfpp"] Jan 28 16:01:00 crc kubenswrapper[4981]: I0128 16:01:00.259858 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-fernet-keys\") pod \"keystone-cron-29493601-rsfpp\" (UID: \"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\") " pod="openstack/keystone-cron-29493601-rsfpp" Jan 28 16:01:00 crc kubenswrapper[4981]: I0128 16:01:00.259977 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-combined-ca-bundle\") pod \"keystone-cron-29493601-rsfpp\" (UID: \"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\") " pod="openstack/keystone-cron-29493601-rsfpp" Jan 28 16:01:00 crc kubenswrapper[4981]: I0128 16:01:00.260031 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbkhv\" (UniqueName: \"kubernetes.io/projected/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-kube-api-access-fbkhv\") pod \"keystone-cron-29493601-rsfpp\" (UID: \"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\") " pod="openstack/keystone-cron-29493601-rsfpp" Jan 28 16:01:00 crc kubenswrapper[4981]: I0128 16:01:00.260161 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-config-data\") pod \"keystone-cron-29493601-rsfpp\" (UID: \"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\") " pod="openstack/keystone-cron-29493601-rsfpp" Jan 28 16:01:00 crc kubenswrapper[4981]: I0128 16:01:00.362137 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-fernet-keys\") pod \"keystone-cron-29493601-rsfpp\" (UID: \"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\") " pod="openstack/keystone-cron-29493601-rsfpp" Jan 28 16:01:00 crc kubenswrapper[4981]: I0128 16:01:00.362218 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-combined-ca-bundle\") pod \"keystone-cron-29493601-rsfpp\" (UID: 
\"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\") " pod="openstack/keystone-cron-29493601-rsfpp" Jan 28 16:01:00 crc kubenswrapper[4981]: I0128 16:01:00.362262 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbkhv\" (UniqueName: \"kubernetes.io/projected/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-kube-api-access-fbkhv\") pod \"keystone-cron-29493601-rsfpp\" (UID: \"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\") " pod="openstack/keystone-cron-29493601-rsfpp" Jan 28 16:01:00 crc kubenswrapper[4981]: I0128 16:01:00.362336 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-config-data\") pod \"keystone-cron-29493601-rsfpp\" (UID: \"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\") " pod="openstack/keystone-cron-29493601-rsfpp" Jan 28 16:01:00 crc kubenswrapper[4981]: I0128 16:01:00.369469 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-combined-ca-bundle\") pod \"keystone-cron-29493601-rsfpp\" (UID: \"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\") " pod="openstack/keystone-cron-29493601-rsfpp" Jan 28 16:01:00 crc kubenswrapper[4981]: I0128 16:01:00.369661 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-fernet-keys\") pod \"keystone-cron-29493601-rsfpp\" (UID: \"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\") " pod="openstack/keystone-cron-29493601-rsfpp" Jan 28 16:01:00 crc kubenswrapper[4981]: I0128 16:01:00.373658 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-config-data\") pod \"keystone-cron-29493601-rsfpp\" (UID: \"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\") " pod="openstack/keystone-cron-29493601-rsfpp" Jan 28 16:01:00 crc kubenswrapper[4981]: I0128 16:01:00.378098 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbkhv\" (UniqueName: \"kubernetes.io/projected/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-kube-api-access-fbkhv\") pod \"keystone-cron-29493601-rsfpp\" (UID: \"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\") " pod="openstack/keystone-cron-29493601-rsfpp" Jan 28 16:01:00 crc kubenswrapper[4981]: I0128 16:01:00.478624 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29493601-rsfpp" Jan 28 16:01:00 crc kubenswrapper[4981]: I0128 16:01:00.925249 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29493601-rsfpp"] Jan 28 16:01:01 crc kubenswrapper[4981]: I0128 16:01:01.834460 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493601-rsfpp" event={"ID":"46fad62e-3ca9-4842-a2c7-0e0fd654d37f","Type":"ContainerStarted","Data":"96ac5d9f081a4aae0e22cbf3599f144ab256cd61a8d345c885b29edea6496737"} Jan 28 16:01:01 crc kubenswrapper[4981]: I0128 16:01:01.834770 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493601-rsfpp" event={"ID":"46fad62e-3ca9-4842-a2c7-0e0fd654d37f","Type":"ContainerStarted","Data":"7ce4d0ff4d96137f9c05cf8e10eabf8753ea580ed31b6e7ca1a1b78c88f4eb36"} Jan 28 16:01:01 crc kubenswrapper[4981]: I0128 16:01:01.850655 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29493601-rsfpp" podStartSLOduration=1.85060856 podStartE2EDuration="1.85060856s" podCreationTimestamp="2026-01-28 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:01:01.848023981 +0000 UTC m=+3473.300182222" watchObservedRunningTime="2026-01-28 16:01:01.85060856 +0000 UTC m=+3473.302766811" Jan 28 16:01:03 crc kubenswrapper[4981]: I0128 16:01:03.853286 4981 generic.go:334] "Generic (PLEG): container finished" podID="46fad62e-3ca9-4842-a2c7-0e0fd654d37f" containerID="96ac5d9f081a4aae0e22cbf3599f144ab256cd61a8d345c885b29edea6496737" exitCode=0 Jan 28 16:01:03 crc kubenswrapper[4981]: I0128 16:01:03.853387 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493601-rsfpp" event={"ID":"46fad62e-3ca9-4842-a2c7-0e0fd654d37f","Type":"ContainerDied","Data":"96ac5d9f081a4aae0e22cbf3599f144ab256cd61a8d345c885b29edea6496737"} Jan 28 16:01:05 crc kubenswrapper[4981]: I0128 16:01:05.305818 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29493601-rsfpp"
Jan 28 16:01:05 crc kubenswrapper[4981]: I0128 16:01:05.455912 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-combined-ca-bundle\") pod \"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\" (UID: \"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\") "
Jan 28 16:01:05 crc kubenswrapper[4981]: I0128 16:01:05.456043 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbkhv\" (UniqueName: \"kubernetes.io/projected/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-kube-api-access-fbkhv\") pod \"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\" (UID: \"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\") "
Jan 28 16:01:05 crc kubenswrapper[4981]: I0128 16:01:05.456069 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-config-data\") pod \"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\" (UID: \"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\") "
Jan 28 16:01:05 crc kubenswrapper[4981]: I0128 16:01:05.456178 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-fernet-keys\") pod \"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\" (UID: \"46fad62e-3ca9-4842-a2c7-0e0fd654d37f\") "
Jan 28 16:01:05 crc kubenswrapper[4981]: I0128 16:01:05.462491 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-kube-api-access-fbkhv" (OuterVolumeSpecName: "kube-api-access-fbkhv") pod "46fad62e-3ca9-4842-a2c7-0e0fd654d37f" (UID: "46fad62e-3ca9-4842-a2c7-0e0fd654d37f"). InnerVolumeSpecName "kube-api-access-fbkhv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 16:01:05 crc kubenswrapper[4981]: I0128 16:01:05.463350 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "46fad62e-3ca9-4842-a2c7-0e0fd654d37f" (UID: "46fad62e-3ca9-4842-a2c7-0e0fd654d37f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 16:01:05 crc kubenswrapper[4981]: I0128 16:01:05.488335 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46fad62e-3ca9-4842-a2c7-0e0fd654d37f" (UID: "46fad62e-3ca9-4842-a2c7-0e0fd654d37f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 16:01:05 crc kubenswrapper[4981]: I0128 16:01:05.506518 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-config-data" (OuterVolumeSpecName: "config-data") pod "46fad62e-3ca9-4842-a2c7-0e0fd654d37f" (UID: "46fad62e-3ca9-4842-a2c7-0e0fd654d37f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 16:01:05 crc kubenswrapper[4981]: I0128 16:01:05.558947 4981 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 28 16:01:05 crc kubenswrapper[4981]: I0128 16:01:05.558976 4981 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 16:01:05 crc kubenswrapper[4981]: I0128 16:01:05.559009 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbkhv\" (UniqueName: \"kubernetes.io/projected/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-kube-api-access-fbkhv\") on node \"crc\" DevicePath \"\""
Jan 28 16:01:05 crc kubenswrapper[4981]: I0128 16:01:05.559018 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46fad62e-3ca9-4842-a2c7-0e0fd654d37f-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 16:01:05 crc kubenswrapper[4981]: I0128 16:01:05.874350 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493601-rsfpp" event={"ID":"46fad62e-3ca9-4842-a2c7-0e0fd654d37f","Type":"ContainerDied","Data":"7ce4d0ff4d96137f9c05cf8e10eabf8753ea580ed31b6e7ca1a1b78c88f4eb36"}
Jan 28 16:01:05 crc kubenswrapper[4981]: I0128 16:01:05.874691 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ce4d0ff4d96137f9c05cf8e10eabf8753ea580ed31b6e7ca1a1b78c88f4eb36"
Jan 28 16:01:05 crc kubenswrapper[4981]: I0128 16:01:05.874420 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29493601-rsfpp"
Jan 28 16:01:51 crc kubenswrapper[4981]: I0128 16:01:51.127417 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j7xxw"]
Jan 28 16:01:51 crc kubenswrapper[4981]: E0128 16:01:51.128683 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46fad62e-3ca9-4842-a2c7-0e0fd654d37f" containerName="keystone-cron"
Jan 28 16:01:51 crc kubenswrapper[4981]: I0128 16:01:51.128700 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="46fad62e-3ca9-4842-a2c7-0e0fd654d37f" containerName="keystone-cron"
Jan 28 16:01:51 crc kubenswrapper[4981]: I0128 16:01:51.128931 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="46fad62e-3ca9-4842-a2c7-0e0fd654d37f" containerName="keystone-cron"
Jan 28 16:01:51 crc kubenswrapper[4981]: I0128 16:01:51.130303 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7xxw"
Jan 28 16:01:51 crc kubenswrapper[4981]: I0128 16:01:51.141416 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7xxw"]
Jan 28 16:01:51 crc kubenswrapper[4981]: I0128 16:01:51.289389 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94dc9742-a459-4f19-9352-8ecc8b881745-utilities\") pod \"redhat-marketplace-j7xxw\" (UID: \"94dc9742-a459-4f19-9352-8ecc8b881745\") " pod="openshift-marketplace/redhat-marketplace-j7xxw"
Jan 28 16:01:51 crc kubenswrapper[4981]: I0128 16:01:51.289752 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94dc9742-a459-4f19-9352-8ecc8b881745-catalog-content\") pod \"redhat-marketplace-j7xxw\" (UID: \"94dc9742-a459-4f19-9352-8ecc8b881745\") " pod="openshift-marketplace/redhat-marketplace-j7xxw"
Jan 28 16:01:51 crc kubenswrapper[4981]: I0128 16:01:51.290040 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj9dl\" (UniqueName: \"kubernetes.io/projected/94dc9742-a459-4f19-9352-8ecc8b881745-kube-api-access-gj9dl\") pod \"redhat-marketplace-j7xxw\" (UID: \"94dc9742-a459-4f19-9352-8ecc8b881745\") " pod="openshift-marketplace/redhat-marketplace-j7xxw"
Jan 28 16:01:51 crc kubenswrapper[4981]: I0128 16:01:51.392063 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj9dl\" (UniqueName: \"kubernetes.io/projected/94dc9742-a459-4f19-9352-8ecc8b881745-kube-api-access-gj9dl\") pod \"redhat-marketplace-j7xxw\" (UID: \"94dc9742-a459-4f19-9352-8ecc8b881745\") " pod="openshift-marketplace/redhat-marketplace-j7xxw"
Jan 28 16:01:51 crc kubenswrapper[4981]: I0128 16:01:51.392167 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94dc9742-a459-4f19-9352-8ecc8b881745-utilities\") pod \"redhat-marketplace-j7xxw\" (UID: \"94dc9742-a459-4f19-9352-8ecc8b881745\") " pod="openshift-marketplace/redhat-marketplace-j7xxw"
Jan 28 16:01:51 crc kubenswrapper[4981]: I0128 16:01:51.392291 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94dc9742-a459-4f19-9352-8ecc8b881745-catalog-content\") pod \"redhat-marketplace-j7xxw\" (UID: \"94dc9742-a459-4f19-9352-8ecc8b881745\") " pod="openshift-marketplace/redhat-marketplace-j7xxw"
Jan 28 16:01:51 crc kubenswrapper[4981]: I0128 16:01:51.392805 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94dc9742-a459-4f19-9352-8ecc8b881745-utilities\") pod \"redhat-marketplace-j7xxw\" (UID: \"94dc9742-a459-4f19-9352-8ecc8b881745\") " pod="openshift-marketplace/redhat-marketplace-j7xxw"
Jan 28 16:01:51 crc kubenswrapper[4981]: I0128 16:01:51.392823 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94dc9742-a459-4f19-9352-8ecc8b881745-catalog-content\") pod \"redhat-marketplace-j7xxw\" (UID: \"94dc9742-a459-4f19-9352-8ecc8b881745\") " pod="openshift-marketplace/redhat-marketplace-j7xxw"
Jan 28 16:01:51 crc kubenswrapper[4981]: I0128 16:01:51.411408 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj9dl\" (UniqueName: \"kubernetes.io/projected/94dc9742-a459-4f19-9352-8ecc8b881745-kube-api-access-gj9dl\") pod \"redhat-marketplace-j7xxw\" (UID: \"94dc9742-a459-4f19-9352-8ecc8b881745\") " pod="openshift-marketplace/redhat-marketplace-j7xxw"
Jan 28 16:01:51 crc kubenswrapper[4981]: I0128 16:01:51.455259 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7xxw"
Jan 28 16:01:51 crc kubenswrapper[4981]: I0128 16:01:51.957601 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7xxw"]
Jan 28 16:01:52 crc kubenswrapper[4981]: I0128 16:01:52.383878 4981 generic.go:334] "Generic (PLEG): container finished" podID="94dc9742-a459-4f19-9352-8ecc8b881745" containerID="b287c2ba59836a2310703e3dee788ed02c47c1056b7c9bd7a45c4b442de2185a" exitCode=0
Jan 28 16:01:52 crc kubenswrapper[4981]: I0128 16:01:52.383986 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7xxw" event={"ID":"94dc9742-a459-4f19-9352-8ecc8b881745","Type":"ContainerDied","Data":"b287c2ba59836a2310703e3dee788ed02c47c1056b7c9bd7a45c4b442de2185a"}
Jan 28 16:01:52 crc kubenswrapper[4981]: I0128 16:01:52.385039 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7xxw" event={"ID":"94dc9742-a459-4f19-9352-8ecc8b881745","Type":"ContainerStarted","Data":"4b7424a9bac464d3ded4f9859f0f7895e390d3d54135b91e9effea5b1b3ed075"}
Jan 28 16:01:52 crc kubenswrapper[4981]: I0128 16:01:52.385877 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 16:01:53 crc kubenswrapper[4981]: I0128 16:01:53.396292 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7xxw" event={"ID":"94dc9742-a459-4f19-9352-8ecc8b881745","Type":"ContainerStarted","Data":"93755fae1cd267c932759db450630f51feba02393ef4dbe0a1d75a4c856a97c8"}
Jan 28 16:01:54 crc kubenswrapper[4981]: I0128 16:01:54.407674 4981 generic.go:334] "Generic (PLEG): container finished" podID="94dc9742-a459-4f19-9352-8ecc8b881745" containerID="93755fae1cd267c932759db450630f51feba02393ef4dbe0a1d75a4c856a97c8" exitCode=0
Jan 28 16:01:54 crc kubenswrapper[4981]: I0128 16:01:54.407930 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7xxw" event={"ID":"94dc9742-a459-4f19-9352-8ecc8b881745","Type":"ContainerDied","Data":"93755fae1cd267c932759db450630f51feba02393ef4dbe0a1d75a4c856a97c8"}
Jan 28 16:01:55 crc kubenswrapper[4981]: I0128 16:01:55.419809 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7xxw" event={"ID":"94dc9742-a459-4f19-9352-8ecc8b881745","Type":"ContainerStarted","Data":"75b6df48ccc1bfbf957b082276c917351d15bae59df4b87819547696d8ddf90b"}
Jan 28 16:01:55 crc kubenswrapper[4981]: I0128 16:01:55.441438 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j7xxw" podStartSLOduration=1.9758486579999999 podStartE2EDuration="4.441418839s" podCreationTimestamp="2026-01-28 16:01:51 +0000 UTC" firstStartedPulling="2026-01-28 16:01:52.385599174 +0000 UTC m=+3523.837757425" lastFinishedPulling="2026-01-28 16:01:54.851169365 +0000 UTC m=+3526.303327606" observedRunningTime="2026-01-28 16:01:55.437291439 +0000 UTC m=+3526.889449690" watchObservedRunningTime="2026-01-28 16:01:55.441418839 +0000 UTC m=+3526.893577080"
Jan 28 16:02:01 crc kubenswrapper[4981]: I0128 16:02:01.456309 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j7xxw"
Jan 28 16:02:01 crc kubenswrapper[4981]: I0128 16:02:01.457229 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j7xxw"
Jan 28 16:02:01 crc kubenswrapper[4981]: I0128 16:02:01.529204 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j7xxw"
Jan 28 16:02:01 crc kubenswrapper[4981]: I0128 16:02:01.582688 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j7xxw"
Jan 28 16:02:04 crc kubenswrapper[4981]: I0128 16:02:04.111994 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7xxw"]
Jan 28 16:02:04 crc kubenswrapper[4981]: I0128 16:02:04.112529 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j7xxw" podUID="94dc9742-a459-4f19-9352-8ecc8b881745" containerName="registry-server" containerID="cri-o://75b6df48ccc1bfbf957b082276c917351d15bae59df4b87819547696d8ddf90b" gracePeriod=2
Jan 28 16:02:04 crc kubenswrapper[4981]: I0128 16:02:04.499512 4981 generic.go:334] "Generic (PLEG): container finished" podID="94dc9742-a459-4f19-9352-8ecc8b881745" containerID="75b6df48ccc1bfbf957b082276c917351d15bae59df4b87819547696d8ddf90b" exitCode=0
Jan 28 16:02:04 crc kubenswrapper[4981]: I0128 16:02:04.499818 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7xxw" event={"ID":"94dc9742-a459-4f19-9352-8ecc8b881745","Type":"ContainerDied","Data":"75b6df48ccc1bfbf957b082276c917351d15bae59df4b87819547696d8ddf90b"}
Jan 28 16:02:04 crc kubenswrapper[4981]: I0128 16:02:04.624089 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7xxw"
Jan 28 16:02:04 crc kubenswrapper[4981]: I0128 16:02:04.755951 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94dc9742-a459-4f19-9352-8ecc8b881745-catalog-content\") pod \"94dc9742-a459-4f19-9352-8ecc8b881745\" (UID: \"94dc9742-a459-4f19-9352-8ecc8b881745\") "
Jan 28 16:02:04 crc kubenswrapper[4981]: I0128 16:02:04.756065 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gj9dl\" (UniqueName: \"kubernetes.io/projected/94dc9742-a459-4f19-9352-8ecc8b881745-kube-api-access-gj9dl\") pod \"94dc9742-a459-4f19-9352-8ecc8b881745\" (UID: \"94dc9742-a459-4f19-9352-8ecc8b881745\") "
Jan 28 16:02:04 crc kubenswrapper[4981]: I0128 16:02:04.756260 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94dc9742-a459-4f19-9352-8ecc8b881745-utilities\") pod \"94dc9742-a459-4f19-9352-8ecc8b881745\" (UID: \"94dc9742-a459-4f19-9352-8ecc8b881745\") "
Jan 28 16:02:04 crc kubenswrapper[4981]: I0128 16:02:04.757150 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94dc9742-a459-4f19-9352-8ecc8b881745-utilities" (OuterVolumeSpecName: "utilities") pod "94dc9742-a459-4f19-9352-8ecc8b881745" (UID: "94dc9742-a459-4f19-9352-8ecc8b881745"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:02:04 crc kubenswrapper[4981]: I0128 16:02:04.768443 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94dc9742-a459-4f19-9352-8ecc8b881745-kube-api-access-gj9dl" (OuterVolumeSpecName: "kube-api-access-gj9dl") pod "94dc9742-a459-4f19-9352-8ecc8b881745" (UID: "94dc9742-a459-4f19-9352-8ecc8b881745"). InnerVolumeSpecName "kube-api-access-gj9dl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 16:02:04 crc kubenswrapper[4981]: I0128 16:02:04.776411 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94dc9742-a459-4f19-9352-8ecc8b881745-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94dc9742-a459-4f19-9352-8ecc8b881745" (UID: "94dc9742-a459-4f19-9352-8ecc8b881745"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:02:04 crc kubenswrapper[4981]: I0128 16:02:04.857731 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94dc9742-a459-4f19-9352-8ecc8b881745-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 16:02:04 crc kubenswrapper[4981]: I0128 16:02:04.857775 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gj9dl\" (UniqueName: \"kubernetes.io/projected/94dc9742-a459-4f19-9352-8ecc8b881745-kube-api-access-gj9dl\") on node \"crc\" DevicePath \"\""
Jan 28 16:02:04 crc kubenswrapper[4981]: I0128 16:02:04.857789 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94dc9742-a459-4f19-9352-8ecc8b881745-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 16:02:05 crc kubenswrapper[4981]: I0128 16:02:05.516174 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j7xxw" event={"ID":"94dc9742-a459-4f19-9352-8ecc8b881745","Type":"ContainerDied","Data":"4b7424a9bac464d3ded4f9859f0f7895e390d3d54135b91e9effea5b1b3ed075"}
Jan 28 16:02:05 crc kubenswrapper[4981]: I0128 16:02:05.516243 4981 scope.go:117] "RemoveContainer" containerID="75b6df48ccc1bfbf957b082276c917351d15bae59df4b87819547696d8ddf90b"
Jan 28 16:02:05 crc kubenswrapper[4981]: I0128 16:02:05.516255 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j7xxw"
Jan 28 16:02:05 crc kubenswrapper[4981]: I0128 16:02:05.556444 4981 scope.go:117] "RemoveContainer" containerID="93755fae1cd267c932759db450630f51feba02393ef4dbe0a1d75a4c856a97c8"
Jan 28 16:02:05 crc kubenswrapper[4981]: I0128 16:02:05.570795 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7xxw"]
Jan 28 16:02:05 crc kubenswrapper[4981]: I0128 16:02:05.582318 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j7xxw"]
Jan 28 16:02:05 crc kubenswrapper[4981]: I0128 16:02:05.599995 4981 scope.go:117] "RemoveContainer" containerID="b287c2ba59836a2310703e3dee788ed02c47c1056b7c9bd7a45c4b442de2185a"
Jan 28 16:02:07 crc kubenswrapper[4981]: I0128 16:02:07.333749 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94dc9742-a459-4f19-9352-8ecc8b881745" path="/var/lib/kubelet/pods/94dc9742-a459-4f19-9352-8ecc8b881745/volumes"
Jan 28 16:02:35 crc kubenswrapper[4981]: E0128 16:02:35.609036 4981 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.291s"
Jan 28 16:02:35 crc kubenswrapper[4981]: I0128 16:02:35.761900 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="bfb88da5-80c7-481b-89ba-2c5c08c258c0" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Jan 28 16:02:50 crc kubenswrapper[4981]: I0128 16:02:49.897705 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 16:02:50 crc kubenswrapper[4981]: I0128 16:02:49.898400 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 16:02:59 crc kubenswrapper[4981]: I0128 16:02:59.017513 4981 generic.go:334] "Generic (PLEG): container finished" podID="c34b143a-0284-461d-a788-106a5f6dca6c" containerID="bc1080e0f851f577c791f267752187334b1531601ceabe4cd9250bc7f6e32822" exitCode=0
Jan 28 16:02:59 crc kubenswrapper[4981]: I0128 16:02:59.017738 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c34b143a-0284-461d-a788-106a5f6dca6c","Type":"ContainerDied","Data":"bc1080e0f851f577c791f267752187334b1531601ceabe4cd9250bc7f6e32822"}
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.384419 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.482139 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c34b143a-0284-461d-a788-106a5f6dca6c-ca-certs\") pod \"c34b143a-0284-461d-a788-106a5f6dca6c\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") "
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.482324 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"c34b143a-0284-461d-a788-106a5f6dca6c\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") "
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.482380 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s74k4\" (UniqueName: \"kubernetes.io/projected/c34b143a-0284-461d-a788-106a5f6dca6c-kube-api-access-s74k4\") pod \"c34b143a-0284-461d-a788-106a5f6dca6c\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") "
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.482413 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c34b143a-0284-461d-a788-106a5f6dca6c-test-operator-ephemeral-temporary\") pod \"c34b143a-0284-461d-a788-106a5f6dca6c\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") "
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.482530 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c34b143a-0284-461d-a788-106a5f6dca6c-ssh-key\") pod \"c34b143a-0284-461d-a788-106a5f6dca6c\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") "
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.482554 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c34b143a-0284-461d-a788-106a5f6dca6c-config-data\") pod \"c34b143a-0284-461d-a788-106a5f6dca6c\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") "
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.482573 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c34b143a-0284-461d-a788-106a5f6dca6c-test-operator-ephemeral-workdir\") pod \"c34b143a-0284-461d-a788-106a5f6dca6c\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") "
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.482592 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c34b143a-0284-461d-a788-106a5f6dca6c-openstack-config-secret\") pod \"c34b143a-0284-461d-a788-106a5f6dca6c\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") "
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.482691 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c34b143a-0284-461d-a788-106a5f6dca6c-openstack-config\") pod \"c34b143a-0284-461d-a788-106a5f6dca6c\" (UID: \"c34b143a-0284-461d-a788-106a5f6dca6c\") "
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.483491 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c34b143a-0284-461d-a788-106a5f6dca6c-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "c34b143a-0284-461d-a788-106a5f6dca6c" (UID: "c34b143a-0284-461d-a788-106a5f6dca6c"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.484262 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c34b143a-0284-461d-a788-106a5f6dca6c-config-data" (OuterVolumeSpecName: "config-data") pod "c34b143a-0284-461d-a788-106a5f6dca6c" (UID: "c34b143a-0284-461d-a788-106a5f6dca6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.487667 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c34b143a-0284-461d-a788-106a5f6dca6c-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "c34b143a-0284-461d-a788-106a5f6dca6c" (UID: "c34b143a-0284-461d-a788-106a5f6dca6c"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.487826 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "c34b143a-0284-461d-a788-106a5f6dca6c" (UID: "c34b143a-0284-461d-a788-106a5f6dca6c"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.489394 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c34b143a-0284-461d-a788-106a5f6dca6c-kube-api-access-s74k4" (OuterVolumeSpecName: "kube-api-access-s74k4") pod "c34b143a-0284-461d-a788-106a5f6dca6c" (UID: "c34b143a-0284-461d-a788-106a5f6dca6c"). InnerVolumeSpecName "kube-api-access-s74k4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.508933 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c34b143a-0284-461d-a788-106a5f6dca6c-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "c34b143a-0284-461d-a788-106a5f6dca6c" (UID: "c34b143a-0284-461d-a788-106a5f6dca6c"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.513732 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c34b143a-0284-461d-a788-106a5f6dca6c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c34b143a-0284-461d-a788-106a5f6dca6c" (UID: "c34b143a-0284-461d-a788-106a5f6dca6c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.515348 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c34b143a-0284-461d-a788-106a5f6dca6c-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "c34b143a-0284-461d-a788-106a5f6dca6c" (UID: "c34b143a-0284-461d-a788-106a5f6dca6c"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.530128 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c34b143a-0284-461d-a788-106a5f6dca6c-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "c34b143a-0284-461d-a788-106a5f6dca6c" (UID: "c34b143a-0284-461d-a788-106a5f6dca6c"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.585052 4981 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/c34b143a-0284-461d-a788-106a5f6dca6c-ca-certs\") on node \"crc\" DevicePath \"\""
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.585657 4981 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.585737 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s74k4\" (UniqueName: \"kubernetes.io/projected/c34b143a-0284-461d-a788-106a5f6dca6c-kube-api-access-s74k4\") on node \"crc\" DevicePath \"\""
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.585809 4981 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/c34b143a-0284-461d-a788-106a5f6dca6c-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.585877 4981 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c34b143a-0284-461d-a788-106a5f6dca6c-ssh-key\") on node \"crc\" DevicePath \"\""
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.585949 4981 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c34b143a-0284-461d-a788-106a5f6dca6c-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.586015 4981 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/c34b143a-0284-461d-a788-106a5f6dca6c-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.586081 4981 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/c34b143a-0284-461d-a788-106a5f6dca6c-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.586143 4981 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/c34b143a-0284-461d-a788-106a5f6dca6c-openstack-config\") on node \"crc\" DevicePath \"\""
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.603844 4981 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Jan 28 16:03:00 crc kubenswrapper[4981]: I0128 16:03:00.687801 4981 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Jan 28 16:03:01 crc kubenswrapper[4981]: I0128 16:03:01.039432 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"c34b143a-0284-461d-a788-106a5f6dca6c","Type":"ContainerDied","Data":"24930454d4256a90f8577289750f63ed4c21f31a8491af4fa0a57f630deaded2"}
Jan 28 16:03:01 crc kubenswrapper[4981]: I0128 16:03:01.039469 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24930454d4256a90f8577289750f63ed4c21f31a8491af4fa0a57f630deaded2"
Jan 28 16:03:01 crc kubenswrapper[4981]: I0128 16:03:01.039512 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Jan 28 16:03:10 crc kubenswrapper[4981]: I0128 16:03:10.912975 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 28 16:03:10 crc kubenswrapper[4981]: E0128 16:03:10.914029 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c34b143a-0284-461d-a788-106a5f6dca6c" containerName="tempest-tests-tempest-tests-runner"
Jan 28 16:03:10 crc kubenswrapper[4981]: I0128 16:03:10.914044 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="c34b143a-0284-461d-a788-106a5f6dca6c" containerName="tempest-tests-tempest-tests-runner"
Jan 28 16:03:10 crc kubenswrapper[4981]: E0128 16:03:10.914052 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94dc9742-a459-4f19-9352-8ecc8b881745" containerName="registry-server"
Jan 28 16:03:10 crc kubenswrapper[4981]: I0128 16:03:10.914059 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="94dc9742-a459-4f19-9352-8ecc8b881745" containerName="registry-server"
Jan 28 16:03:10 crc kubenswrapper[4981]: E0128 16:03:10.914069 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94dc9742-a459-4f19-9352-8ecc8b881745" containerName="extract-utilities"
Jan 28 16:03:10 crc kubenswrapper[4981]: I0128 16:03:10.914075 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="94dc9742-a459-4f19-9352-8ecc8b881745" containerName="extract-utilities"
Jan 28 16:03:10 crc kubenswrapper[4981]: E0128 16:03:10.914083 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94dc9742-a459-4f19-9352-8ecc8b881745" containerName="extract-content"
Jan 28 16:03:10 crc kubenswrapper[4981]: I0128 16:03:10.914088 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="94dc9742-a459-4f19-9352-8ecc8b881745" containerName="extract-content"
Jan 28 16:03:10 crc kubenswrapper[4981]: I0128 16:03:10.914338 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="94dc9742-a459-4f19-9352-8ecc8b881745" containerName="registry-server"
Jan 28 16:03:10 crc kubenswrapper[4981]: I0128 16:03:10.914361 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="c34b143a-0284-461d-a788-106a5f6dca6c" containerName="tempest-tests-tempest-tests-runner"
Jan 28 16:03:10 crc kubenswrapper[4981]: I0128 16:03:10.915030 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 28 16:03:10 crc kubenswrapper[4981]: I0128 16:03:10.920678 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-kzkdq"
Jan 28 16:03:10 crc kubenswrapper[4981]: I0128 16:03:10.948136 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 28 16:03:11 crc kubenswrapper[4981]: I0128 16:03:11.027948 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9d341b2-c188-4cb6-a39a-0313e67fac6e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 28 16:03:11 crc kubenswrapper[4981]: I0128 16:03:11.028098 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s4gj\" (UniqueName: \"kubernetes.io/projected/b9d341b2-c188-4cb6-a39a-0313e67fac6e-kube-api-access-7s4gj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9d341b2-c188-4cb6-a39a-0313e67fac6e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 28 16:03:11 crc kubenswrapper[4981]: I0128 16:03:11.130532 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s4gj\" (UniqueName: \"kubernetes.io/projected/b9d341b2-c188-4cb6-a39a-0313e67fac6e-kube-api-access-7s4gj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9d341b2-c188-4cb6-a39a-0313e67fac6e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 28 16:03:11 crc kubenswrapper[4981]: I0128 16:03:11.130820 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9d341b2-c188-4cb6-a39a-0313e67fac6e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 28 16:03:11 crc kubenswrapper[4981]: I0128 16:03:11.131223 4981 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9d341b2-c188-4cb6-a39a-0313e67fac6e\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 28 16:03:11 crc kubenswrapper[4981]: I0128 16:03:11.167450 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s4gj\" (UniqueName: \"kubernetes.io/projected/b9d341b2-c188-4cb6-a39a-0313e67fac6e-kube-api-access-7s4gj\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9d341b2-c188-4cb6-a39a-0313e67fac6e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 28 16:03:11 crc kubenswrapper[4981]: I0128 16:03:11.175555 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"b9d341b2-c188-4cb6-a39a-0313e67fac6e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 28 16:03:11 crc kubenswrapper[4981]: I0128 16:03:11.244343 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"
Jan 28 16:03:11 crc kubenswrapper[4981]: I0128 16:03:11.706897 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Jan 28 16:03:12 crc kubenswrapper[4981]: I0128 16:03:12.157069 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b9d341b2-c188-4cb6-a39a-0313e67fac6e","Type":"ContainerStarted","Data":"1c4ffce7e5eb1a53d9842fa570aede6a001872efcf88997f75259821ca4709a3"}
Jan 28 16:03:14 crc kubenswrapper[4981]: I0128 16:03:14.182280 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"b9d341b2-c188-4cb6-a39a-0313e67fac6e","Type":"ContainerStarted","Data":"ad341c32638c99374555063f2d25223ac185374e4ac21ce42c34756686de987e"}
Jan 28 16:03:14 crc kubenswrapper[4981]: I0128 16:03:14.211629 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.432986126 podStartE2EDuration="4.211607937s" podCreationTimestamp="2026-01-28 16:03:10 +0000 UTC" firstStartedPulling="2026-01-28 16:03:11.716305852 +0000 UTC m=+3603.168464103" lastFinishedPulling="2026-01-28 16:03:13.494927673 +0000 UTC m=+3604.947085914" observedRunningTime="2026-01-28 16:03:14.199072723 +0000 UTC m=+3605.651230984" watchObservedRunningTime="2026-01-28 16:03:14.211607937 +0000 UTC m=+3605.663766188"
Jan 28 16:03:15 crc kubenswrapper[4981]: I0128 16:03:15.533974 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h959z"]
Jan 28 16:03:15 crc kubenswrapper[4981]: I0128 16:03:15.538675 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h959z"
Jan 28 16:03:15 crc kubenswrapper[4981]: I0128 16:03:15.548266 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h959z"]
Jan 28 16:03:15 crc kubenswrapper[4981]: I0128 16:03:15.628906 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6251ede4-323a-4dd7-9a04-3510d2560421-utilities\") pod \"community-operators-h959z\" (UID: \"6251ede4-323a-4dd7-9a04-3510d2560421\") " pod="openshift-marketplace/community-operators-h959z"
Jan 28 16:03:15 crc kubenswrapper[4981]: I0128 16:03:15.629026 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjcd9\" (UniqueName: \"kubernetes.io/projected/6251ede4-323a-4dd7-9a04-3510d2560421-kube-api-access-pjcd9\") pod \"community-operators-h959z\" (UID: \"6251ede4-323a-4dd7-9a04-3510d2560421\") " pod="openshift-marketplace/community-operators-h959z"
Jan 28 16:03:15 crc kubenswrapper[4981]: I0128 16:03:15.629145 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6251ede4-323a-4dd7-9a04-3510d2560421-catalog-content\") pod \"community-operators-h959z\" (UID: \"6251ede4-323a-4dd7-9a04-3510d2560421\") " pod="openshift-marketplace/community-operators-h959z"
Jan 28 16:03:15 crc kubenswrapper[4981]: I0128 16:03:15.730892 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjcd9\" (UniqueName: \"kubernetes.io/projected/6251ede4-323a-4dd7-9a04-3510d2560421-kube-api-access-pjcd9\") pod \"community-operators-h959z\" (UID: \"6251ede4-323a-4dd7-9a04-3510d2560421\") " pod="openshift-marketplace/community-operators-h959z"
Jan 28 16:03:15 crc kubenswrapper[4981]: I0128 16:03:15.731001 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6251ede4-323a-4dd7-9a04-3510d2560421-catalog-content\") pod \"community-operators-h959z\" (UID: \"6251ede4-323a-4dd7-9a04-3510d2560421\") " pod="openshift-marketplace/community-operators-h959z"
Jan 28 16:03:15 crc kubenswrapper[4981]: I0128 16:03:15.731134 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6251ede4-323a-4dd7-9a04-3510d2560421-utilities\") pod \"community-operators-h959z\" (UID: \"6251ede4-323a-4dd7-9a04-3510d2560421\") " pod="openshift-marketplace/community-operators-h959z"
Jan 28 16:03:15 crc kubenswrapper[4981]: I0128 16:03:15.731782 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6251ede4-323a-4dd7-9a04-3510d2560421-utilities\") pod \"community-operators-h959z\" (UID: \"6251ede4-323a-4dd7-9a04-3510d2560421\") " pod="openshift-marketplace/community-operators-h959z"
Jan 28 16:03:15 crc kubenswrapper[4981]: I0128 16:03:15.731844 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6251ede4-323a-4dd7-9a04-3510d2560421-catalog-content\") pod \"community-operators-h959z\" (UID: \"6251ede4-323a-4dd7-9a04-3510d2560421\") " pod="openshift-marketplace/community-operators-h959z"
Jan 28 16:03:15 crc kubenswrapper[4981]: I0128 16:03:15.771568 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjcd9\" (UniqueName: \"kubernetes.io/projected/6251ede4-323a-4dd7-9a04-3510d2560421-kube-api-access-pjcd9\") pod \"community-operators-h959z\" (UID: \"6251ede4-323a-4dd7-9a04-3510d2560421\") " pod="openshift-marketplace/community-operators-h959z"
Jan 28 16:03:15 crc kubenswrapper[4981]: I0128 16:03:15.876834 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h959z"
Jan 28 16:03:16 crc kubenswrapper[4981]: I0128 16:03:16.379717 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h959z"]
Jan 28 16:03:16 crc kubenswrapper[4981]: W0128 16:03:16.385372 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6251ede4_323a_4dd7_9a04_3510d2560421.slice/crio-5553a76b43fe3784f83abc5ede1a91e49874c3b24b6e27f15fb3c69b74d6402f WatchSource:0}: Error finding container 5553a76b43fe3784f83abc5ede1a91e49874c3b24b6e27f15fb3c69b74d6402f: Status 404 returned error can't find the container with id 5553a76b43fe3784f83abc5ede1a91e49874c3b24b6e27f15fb3c69b74d6402f
Jan 28 16:03:17 crc kubenswrapper[4981]: I0128 16:03:17.248277 4981 generic.go:334] "Generic (PLEG): container finished" podID="6251ede4-323a-4dd7-9a04-3510d2560421" containerID="e544bb3b9707ab89d0a85421db8f3ed374f30ea020e9767500271ef667440c0b" exitCode=0
Jan 28 16:03:17 crc kubenswrapper[4981]: I0128 16:03:17.248521 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h959z" event={"ID":"6251ede4-323a-4dd7-9a04-3510d2560421","Type":"ContainerDied","Data":"e544bb3b9707ab89d0a85421db8f3ed374f30ea020e9767500271ef667440c0b"}
Jan 28 16:03:17 crc kubenswrapper[4981]: I0128 16:03:17.248551 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h959z" event={"ID":"6251ede4-323a-4dd7-9a04-3510d2560421","Type":"ContainerStarted","Data":"5553a76b43fe3784f83abc5ede1a91e49874c3b24b6e27f15fb3c69b74d6402f"}
Jan 28 16:03:19 crc kubenswrapper[4981]: I0128 16:03:19.268698 4981 generic.go:334] "Generic (PLEG): container finished" podID="6251ede4-323a-4dd7-9a04-3510d2560421" containerID="0e0ca8b98dc6d9d107ea5618332094f71b3691eba27848afe2407c40092af88d" exitCode=0
Jan 28 16:03:19 crc kubenswrapper[4981]: I0128 16:03:19.268768 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h959z" event={"ID":"6251ede4-323a-4dd7-9a04-3510d2560421","Type":"ContainerDied","Data":"0e0ca8b98dc6d9d107ea5618332094f71b3691eba27848afe2407c40092af88d"}
Jan 28 16:03:19 crc kubenswrapper[4981]: I0128 16:03:19.897358 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 16:03:19 crc kubenswrapper[4981]: I0128 16:03:19.897453 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 16:03:21 crc kubenswrapper[4981]: I0128 16:03:21.291785 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h959z" event={"ID":"6251ede4-323a-4dd7-9a04-3510d2560421","Type":"ContainerStarted","Data":"e987a8646cee807dbb43a5a48b6b2c33bdad57a9bdb5b18e2db3d7712c9fd599"}
Jan 28 16:03:21 crc kubenswrapper[4981]: I0128 16:03:21.317164 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h959z" podStartSLOduration=2.998341565 podStartE2EDuration="6.31714182s" podCreationTimestamp="2026-01-28 16:03:15 +0000 UTC" firstStartedPulling="2026-01-28 16:03:17.2500997 +0000 UTC m=+3608.702257941" lastFinishedPulling="2026-01-28 16:03:20.568899955 +0000 UTC m=+3612.021058196" observedRunningTime="2026-01-28 16:03:21.307585505 +0000 UTC m=+3612.759743746" watchObservedRunningTime="2026-01-28 16:03:21.31714182 +0000 UTC m=+3612.769300071"
Jan 28 16:03:25 crc kubenswrapper[4981]: I0128 16:03:25.877810 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h959z"
Jan 28 16:03:25 crc kubenswrapper[4981]: I0128 16:03:25.878317 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h959z"
Jan 28 16:03:25 crc kubenswrapper[4981]: I0128 16:03:25.954395 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h959z"
Jan 28 16:03:26 crc kubenswrapper[4981]: I0128 16:03:26.409011 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h959z"
Jan 28 16:03:26 crc kubenswrapper[4981]: I0128 16:03:26.487451 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h959z"]
Jan 28 16:03:28 crc kubenswrapper[4981]: I0128 16:03:28.364809 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h959z" podUID="6251ede4-323a-4dd7-9a04-3510d2560421" containerName="registry-server" containerID="cri-o://e987a8646cee807dbb43a5a48b6b2c33bdad57a9bdb5b18e2db3d7712c9fd599" gracePeriod=2
Jan 28 16:03:28 crc kubenswrapper[4981]: I0128 16:03:28.901768 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h959z"
Jan 28 16:03:28 crc kubenswrapper[4981]: I0128 16:03:28.992318 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6251ede4-323a-4dd7-9a04-3510d2560421-utilities\") pod \"6251ede4-323a-4dd7-9a04-3510d2560421\" (UID: \"6251ede4-323a-4dd7-9a04-3510d2560421\") "
Jan 28 16:03:28 crc kubenswrapper[4981]: I0128 16:03:28.992446 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6251ede4-323a-4dd7-9a04-3510d2560421-catalog-content\") pod \"6251ede4-323a-4dd7-9a04-3510d2560421\" (UID: \"6251ede4-323a-4dd7-9a04-3510d2560421\") "
Jan 28 16:03:28 crc kubenswrapper[4981]: I0128 16:03:28.992553 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjcd9\" (UniqueName: \"kubernetes.io/projected/6251ede4-323a-4dd7-9a04-3510d2560421-kube-api-access-pjcd9\") pod \"6251ede4-323a-4dd7-9a04-3510d2560421\" (UID: \"6251ede4-323a-4dd7-9a04-3510d2560421\") "
Jan 28 16:03:28 crc kubenswrapper[4981]: I0128 16:03:28.993556 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6251ede4-323a-4dd7-9a04-3510d2560421-utilities" (OuterVolumeSpecName: "utilities") pod "6251ede4-323a-4dd7-9a04-3510d2560421" (UID: "6251ede4-323a-4dd7-9a04-3510d2560421"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:03:28 crc kubenswrapper[4981]: I0128 16:03:28.999108 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6251ede4-323a-4dd7-9a04-3510d2560421-kube-api-access-pjcd9" (OuterVolumeSpecName: "kube-api-access-pjcd9") pod "6251ede4-323a-4dd7-9a04-3510d2560421" (UID: "6251ede4-323a-4dd7-9a04-3510d2560421"). InnerVolumeSpecName "kube-api-access-pjcd9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 16:03:29 crc kubenswrapper[4981]: I0128 16:03:29.057893 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6251ede4-323a-4dd7-9a04-3510d2560421-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6251ede4-323a-4dd7-9a04-3510d2560421" (UID: "6251ede4-323a-4dd7-9a04-3510d2560421"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:03:29 crc kubenswrapper[4981]: I0128 16:03:29.095562 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6251ede4-323a-4dd7-9a04-3510d2560421-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 16:03:29 crc kubenswrapper[4981]: I0128 16:03:29.095620 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6251ede4-323a-4dd7-9a04-3510d2560421-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 16:03:29 crc kubenswrapper[4981]: I0128 16:03:29.095638 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjcd9\" (UniqueName: \"kubernetes.io/projected/6251ede4-323a-4dd7-9a04-3510d2560421-kube-api-access-pjcd9\") on node \"crc\" DevicePath \"\""
Jan 28 16:03:29 crc kubenswrapper[4981]: I0128 16:03:29.375898 4981 generic.go:334] "Generic (PLEG): container finished" podID="6251ede4-323a-4dd7-9a04-3510d2560421" containerID="e987a8646cee807dbb43a5a48b6b2c33bdad57a9bdb5b18e2db3d7712c9fd599" exitCode=0
Jan 28 16:03:29 crc kubenswrapper[4981]: I0128 16:03:29.375946 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h959z" event={"ID":"6251ede4-323a-4dd7-9a04-3510d2560421","Type":"ContainerDied","Data":"e987a8646cee807dbb43a5a48b6b2c33bdad57a9bdb5b18e2db3d7712c9fd599"}
Jan 28 16:03:29 crc kubenswrapper[4981]: I0128 16:03:29.375999 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h959z" event={"ID":"6251ede4-323a-4dd7-9a04-3510d2560421","Type":"ContainerDied","Data":"5553a76b43fe3784f83abc5ede1a91e49874c3b24b6e27f15fb3c69b74d6402f"}
Jan 28 16:03:29 crc kubenswrapper[4981]: I0128 16:03:29.376029 4981 scope.go:117] "RemoveContainer" containerID="e987a8646cee807dbb43a5a48b6b2c33bdad57a9bdb5b18e2db3d7712c9fd599"
Jan 28 16:03:29 crc kubenswrapper[4981]: I0128 16:03:29.375965 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h959z"
Jan 28 16:03:29 crc kubenswrapper[4981]: I0128 16:03:29.402314 4981 scope.go:117] "RemoveContainer" containerID="0e0ca8b98dc6d9d107ea5618332094f71b3691eba27848afe2407c40092af88d"
Jan 28 16:03:29 crc kubenswrapper[4981]: I0128 16:03:29.407346 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h959z"]
Jan 28 16:03:29 crc kubenswrapper[4981]: I0128 16:03:29.418081 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h959z"]
Jan 28 16:03:29 crc kubenswrapper[4981]: I0128 16:03:29.424255 4981 scope.go:117] "RemoveContainer" containerID="e544bb3b9707ab89d0a85421db8f3ed374f30ea020e9767500271ef667440c0b"
Jan 28 16:03:29 crc kubenswrapper[4981]: I0128 16:03:29.489415 4981 scope.go:117] "RemoveContainer" containerID="e987a8646cee807dbb43a5a48b6b2c33bdad57a9bdb5b18e2db3d7712c9fd599"
Jan 28 16:03:29 crc kubenswrapper[4981]: E0128 16:03:29.489865 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e987a8646cee807dbb43a5a48b6b2c33bdad57a9bdb5b18e2db3d7712c9fd599\": container with ID starting with e987a8646cee807dbb43a5a48b6b2c33bdad57a9bdb5b18e2db3d7712c9fd599 not found: ID does not exist" containerID="e987a8646cee807dbb43a5a48b6b2c33bdad57a9bdb5b18e2db3d7712c9fd599"
Jan 28 16:03:29 crc kubenswrapper[4981]: I0128 16:03:29.489922 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e987a8646cee807dbb43a5a48b6b2c33bdad57a9bdb5b18e2db3d7712c9fd599"} err="failed to get container status \"e987a8646cee807dbb43a5a48b6b2c33bdad57a9bdb5b18e2db3d7712c9fd599\": rpc error: code = NotFound desc = could not find container \"e987a8646cee807dbb43a5a48b6b2c33bdad57a9bdb5b18e2db3d7712c9fd599\": container with ID starting with e987a8646cee807dbb43a5a48b6b2c33bdad57a9bdb5b18e2db3d7712c9fd599 not found: ID does not exist"
Jan 28 16:03:29 crc kubenswrapper[4981]: I0128 16:03:29.489958 4981 scope.go:117] "RemoveContainer" containerID="0e0ca8b98dc6d9d107ea5618332094f71b3691eba27848afe2407c40092af88d"
Jan 28 16:03:29 crc kubenswrapper[4981]: E0128 16:03:29.490301 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e0ca8b98dc6d9d107ea5618332094f71b3691eba27848afe2407c40092af88d\": container with ID starting with 0e0ca8b98dc6d9d107ea5618332094f71b3691eba27848afe2407c40092af88d not found: ID does not exist" containerID="0e0ca8b98dc6d9d107ea5618332094f71b3691eba27848afe2407c40092af88d"
Jan 28 16:03:29 crc kubenswrapper[4981]: I0128 16:03:29.490336 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e0ca8b98dc6d9d107ea5618332094f71b3691eba27848afe2407c40092af88d"} err="failed to get container status \"0e0ca8b98dc6d9d107ea5618332094f71b3691eba27848afe2407c40092af88d\": rpc error: code = NotFound desc = could not find container \"0e0ca8b98dc6d9d107ea5618332094f71b3691eba27848afe2407c40092af88d\": container with ID starting with 0e0ca8b98dc6d9d107ea5618332094f71b3691eba27848afe2407c40092af88d not found: ID does not exist"
Jan 28 16:03:29 crc kubenswrapper[4981]: I0128 16:03:29.490354 4981 scope.go:117] "RemoveContainer" containerID="e544bb3b9707ab89d0a85421db8f3ed374f30ea020e9767500271ef667440c0b"
Jan 28 16:03:29 crc kubenswrapper[4981]: E0128 16:03:29.490637 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e544bb3b9707ab89d0a85421db8f3ed374f30ea020e9767500271ef667440c0b\": container with ID starting with e544bb3b9707ab89d0a85421db8f3ed374f30ea020e9767500271ef667440c0b not found: ID does not exist" containerID="e544bb3b9707ab89d0a85421db8f3ed374f30ea020e9767500271ef667440c0b"
Jan 28 16:03:29 crc kubenswrapper[4981]: I0128 16:03:29.490668 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e544bb3b9707ab89d0a85421db8f3ed374f30ea020e9767500271ef667440c0b"} err="failed to get container status \"e544bb3b9707ab89d0a85421db8f3ed374f30ea020e9767500271ef667440c0b\": rpc error: code = NotFound desc = could not find container \"e544bb3b9707ab89d0a85421db8f3ed374f30ea020e9767500271ef667440c0b\": container with ID starting with e544bb3b9707ab89d0a85421db8f3ed374f30ea020e9767500271ef667440c0b not found: ID does not exist"
Jan 28 16:03:31 crc kubenswrapper[4981]: I0128 16:03:31.339104 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6251ede4-323a-4dd7-9a04-3510d2560421" path="/var/lib/kubelet/pods/6251ede4-323a-4dd7-9a04-3510d2560421/volumes"
Jan 28 16:03:35 crc kubenswrapper[4981]: I0128 16:03:35.417860 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dk42d/must-gather-m4plm"]
Jan 28 16:03:35 crc kubenswrapper[4981]: E0128 16:03:35.419155 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6251ede4-323a-4dd7-9a04-3510d2560421" containerName="registry-server"
Jan 28 16:03:35 crc kubenswrapper[4981]: I0128 16:03:35.419253 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="6251ede4-323a-4dd7-9a04-3510d2560421" containerName="registry-server"
Jan 28 16:03:35 crc kubenswrapper[4981]: E0128 16:03:35.419325 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6251ede4-323a-4dd7-9a04-3510d2560421" containerName="extract-content"
Jan 28 16:03:35 crc kubenswrapper[4981]: I0128 16:03:35.419385 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="6251ede4-323a-4dd7-9a04-3510d2560421" containerName="extract-content"
Jan 28 16:03:35 crc kubenswrapper[4981]: E0128 16:03:35.419445 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6251ede4-323a-4dd7-9a04-3510d2560421" containerName="extract-utilities"
Jan 28 16:03:35 crc kubenswrapper[4981]: I0128 16:03:35.419498 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="6251ede4-323a-4dd7-9a04-3510d2560421" containerName="extract-utilities"
Jan 28 16:03:35 crc kubenswrapper[4981]: I0128 16:03:35.419732 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="6251ede4-323a-4dd7-9a04-3510d2560421" containerName="registry-server"
Jan 28 16:03:35 crc kubenswrapper[4981]: I0128 16:03:35.420707 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dk42d/must-gather-m4plm"
Jan 28 16:03:35 crc kubenswrapper[4981]: I0128 16:03:35.423944 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dk42d"/"openshift-service-ca.crt"
Jan 28 16:03:35 crc kubenswrapper[4981]: I0128 16:03:35.424456 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-dk42d"/"default-dockercfg-vfbrw"
Jan 28 16:03:35 crc kubenswrapper[4981]: I0128 16:03:35.425102 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-dk42d"/"kube-root-ca.crt"
Jan 28 16:03:35 crc kubenswrapper[4981]: I0128 16:03:35.438052 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dk42d/must-gather-m4plm"]
Jan 28 16:03:35 crc kubenswrapper[4981]: I0128 16:03:35.512200 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n9gk\" (UniqueName: \"kubernetes.io/projected/442612b4-927a-47bb-b48a-29a6ab80e0bb-kube-api-access-4n9gk\") pod \"must-gather-m4plm\" (UID: \"442612b4-927a-47bb-b48a-29a6ab80e0bb\") " pod="openshift-must-gather-dk42d/must-gather-m4plm"
Jan 28 16:03:35 crc kubenswrapper[4981]: I0128 16:03:35.512501 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/442612b4-927a-47bb-b48a-29a6ab80e0bb-must-gather-output\") pod \"must-gather-m4plm\" (UID: \"442612b4-927a-47bb-b48a-29a6ab80e0bb\") " pod="openshift-must-gather-dk42d/must-gather-m4plm"
Jan 28 16:03:35 crc kubenswrapper[4981]: I0128 16:03:35.614367 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n9gk\" (UniqueName: \"kubernetes.io/projected/442612b4-927a-47bb-b48a-29a6ab80e0bb-kube-api-access-4n9gk\") pod \"must-gather-m4plm\" (UID: \"442612b4-927a-47bb-b48a-29a6ab80e0bb\") " pod="openshift-must-gather-dk42d/must-gather-m4plm"
Jan 28 16:03:35 crc kubenswrapper[4981]: I0128 16:03:35.614435 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/442612b4-927a-47bb-b48a-29a6ab80e0bb-must-gather-output\") pod \"must-gather-m4plm\" (UID: \"442612b4-927a-47bb-b48a-29a6ab80e0bb\") " pod="openshift-must-gather-dk42d/must-gather-m4plm"
Jan 28 16:03:35 crc kubenswrapper[4981]: I0128 16:03:35.614896 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/442612b4-927a-47bb-b48a-29a6ab80e0bb-must-gather-output\") pod \"must-gather-m4plm\" (UID: \"442612b4-927a-47bb-b48a-29a6ab80e0bb\") " pod="openshift-must-gather-dk42d/must-gather-m4plm"
Jan 28 16:03:35 crc kubenswrapper[4981]: I0128 16:03:35.639684 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n9gk\" (UniqueName: \"kubernetes.io/projected/442612b4-927a-47bb-b48a-29a6ab80e0bb-kube-api-access-4n9gk\") pod \"must-gather-m4plm\" (UID: \"442612b4-927a-47bb-b48a-29a6ab80e0bb\") " pod="openshift-must-gather-dk42d/must-gather-m4plm"
Jan 28 16:03:35 crc kubenswrapper[4981]: I0128 16:03:35.745839 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dk42d/must-gather-m4plm"
Jan 28 16:03:36 crc kubenswrapper[4981]: I0128 16:03:36.211878 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dk42d/must-gather-m4plm"]
Jan 28 16:03:36 crc kubenswrapper[4981]: I0128 16:03:36.458728 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dk42d/must-gather-m4plm" event={"ID":"442612b4-927a-47bb-b48a-29a6ab80e0bb","Type":"ContainerStarted","Data":"1a5682fab2808508f62d2868b07636121904b3889311ef53c6163248e8843b57"}
Jan 28 16:03:45 crc kubenswrapper[4981]: I0128 16:03:45.586945 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dk42d/must-gather-m4plm" event={"ID":"442612b4-927a-47bb-b48a-29a6ab80e0bb","Type":"ContainerStarted","Data":"41af3999fd65bddee1fb67239f7575be1d415e3aab47ae8ebbeda4795b2fea03"}
Jan 28 16:03:45 crc kubenswrapper[4981]: I0128 16:03:45.587467 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dk42d/must-gather-m4plm" event={"ID":"442612b4-927a-47bb-b48a-29a6ab80e0bb","Type":"ContainerStarted","Data":"bf210db8c873ad8c599232d1fd654bbd2b50855834163fcf082ff1b75ce942aa"}
Jan 28 16:03:45 crc kubenswrapper[4981]: I0128 16:03:45.611897 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dk42d/must-gather-m4plm" podStartSLOduration=2.123484287 podStartE2EDuration="10.61187262s" podCreationTimestamp="2026-01-28 16:03:35 +0000 UTC" firstStartedPulling="2026-01-28 16:03:36.224496994 +0000 UTC m=+3627.676655235" lastFinishedPulling="2026-01-28 16:03:44.712885327 +0000 UTC m=+3636.165043568" observedRunningTime="2026-01-28 16:03:45.608075289 +0000 UTC m=+3637.060233530" watchObservedRunningTime="2026-01-28 16:03:45.61187262 +0000 UTC m=+3637.064030861"
Jan 28 16:03:48 crc kubenswrapper[4981]: E0128 16:03:48.078647 4981 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.151:51930->38.102.83.151:33457: read tcp 38.102.83.151:51930->38.102.83.151:33457: read: connection reset by peer
Jan 28 16:03:48 crc kubenswrapper[4981]: E0128 16:03:48.078657 4981 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.151:51930->38.102.83.151:33457: write tcp 38.102.83.151:51930->38.102.83.151:33457: write: broken pipe
Jan 28 16:03:48 crc kubenswrapper[4981]: I0128 16:03:48.803480 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dk42d/crc-debug-4kq2h"]
Jan 28 16:03:48 crc kubenswrapper[4981]: I0128 16:03:48.804887 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dk42d/crc-debug-4kq2h"
Jan 28 16:03:48 crc kubenswrapper[4981]: I0128 16:03:48.908400 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhhb4\" (UniqueName: \"kubernetes.io/projected/dd7f5f54-06dd-4044-92ff-20df135afdfd-kube-api-access-lhhb4\") pod \"crc-debug-4kq2h\" (UID: \"dd7f5f54-06dd-4044-92ff-20df135afdfd\") " pod="openshift-must-gather-dk42d/crc-debug-4kq2h"
Jan 28 16:03:48 crc kubenswrapper[4981]: I0128 16:03:48.909048 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd7f5f54-06dd-4044-92ff-20df135afdfd-host\") pod \"crc-debug-4kq2h\" (UID: \"dd7f5f54-06dd-4044-92ff-20df135afdfd\") " pod="openshift-must-gather-dk42d/crc-debug-4kq2h"
Jan 28 16:03:49 crc kubenswrapper[4981]: I0128 16:03:49.010593 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd7f5f54-06dd-4044-92ff-20df135afdfd-host\") pod \"crc-debug-4kq2h\" (UID: \"dd7f5f54-06dd-4044-92ff-20df135afdfd\") " pod="openshift-must-gather-dk42d/crc-debug-4kq2h"
Jan 28 16:03:49 crc kubenswrapper[4981]: I0128 16:03:49.010658 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhhb4\" (UniqueName: \"kubernetes.io/projected/dd7f5f54-06dd-4044-92ff-20df135afdfd-kube-api-access-lhhb4\") pod \"crc-debug-4kq2h\" (UID: \"dd7f5f54-06dd-4044-92ff-20df135afdfd\") " pod="openshift-must-gather-dk42d/crc-debug-4kq2h"
Jan 28 16:03:49 crc kubenswrapper[4981]: I0128 16:03:49.011073 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd7f5f54-06dd-4044-92ff-20df135afdfd-host\") pod \"crc-debug-4kq2h\" (UID: \"dd7f5f54-06dd-4044-92ff-20df135afdfd\") " pod="openshift-must-gather-dk42d/crc-debug-4kq2h"
Jan 28 16:03:49 crc kubenswrapper[4981]: I0128 16:03:49.029470 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhhb4\" (UniqueName: \"kubernetes.io/projected/dd7f5f54-06dd-4044-92ff-20df135afdfd-kube-api-access-lhhb4\") pod \"crc-debug-4kq2h\" (UID: \"dd7f5f54-06dd-4044-92ff-20df135afdfd\") " pod="openshift-must-gather-dk42d/crc-debug-4kq2h"
Jan 28 16:03:49 crc kubenswrapper[4981]: I0128 16:03:49.129903 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dk42d/crc-debug-4kq2h" Jan 28 16:03:49 crc kubenswrapper[4981]: I0128 16:03:49.643247 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dk42d/crc-debug-4kq2h" event={"ID":"dd7f5f54-06dd-4044-92ff-20df135afdfd","Type":"ContainerStarted","Data":"8c9483c780356820ce8ff46cdbf59466b8f91fd8c7fbbfef4e1c633a9cc8d0e0"} Jan 28 16:03:50 crc kubenswrapper[4981]: I0128 16:03:50.045240 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:03:50 crc kubenswrapper[4981]: I0128 16:03:50.045340 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:03:50 crc kubenswrapper[4981]: I0128 16:03:50.045498 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 16:03:50 crc kubenswrapper[4981]: I0128 16:03:50.046559 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"39e0a59a118d98c8222e77fa5717ab9ada8940cd17d88d2684c70c105b80474d"} pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:03:50 crc kubenswrapper[4981]: I0128 16:03:50.046646 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" containerID="cri-o://39e0a59a118d98c8222e77fa5717ab9ada8940cd17d88d2684c70c105b80474d" gracePeriod=600 Jan 28 16:03:50 crc kubenswrapper[4981]: I0128 16:03:50.655901 4981 generic.go:334] "Generic (PLEG): container finished" podID="67525d77-715e-4ec3-bdbb-6854657355c0" containerID="39e0a59a118d98c8222e77fa5717ab9ada8940cd17d88d2684c70c105b80474d" exitCode=0 Jan 28 16:03:50 crc kubenswrapper[4981]: I0128 16:03:50.656234 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerDied","Data":"39e0a59a118d98c8222e77fa5717ab9ada8940cd17d88d2684c70c105b80474d"} Jan 28 16:03:50 crc kubenswrapper[4981]: I0128 16:03:50.656260 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerStarted","Data":"849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539"} Jan 28 16:03:50 crc kubenswrapper[4981]: I0128 16:03:50.656276 4981 scope.go:117] "RemoveContainer" containerID="507f109b1f65ff81f5ae7994e73e93379ff5e04aa200f90a413be478744464b8" Jan 28 16:04:07 crc kubenswrapper[4981]: E0128 16:04:07.924043 4981 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Jan 28 16:04:07 crc kubenswrapper[4981]: E0128 16:04:07.924839 4981 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lhhb4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-4kq2h_openshift-must-gather-dk42d(dd7f5f54-06dd-4044-92ff-20df135afdfd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 16:04:07 crc kubenswrapper[4981]: E0128 16:04:07.926840 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-dk42d/crc-debug-4kq2h" podUID="dd7f5f54-06dd-4044-92ff-20df135afdfd" Jan 28 16:04:08 crc kubenswrapper[4981]: E0128 16:04:08.843181 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-dk42d/crc-debug-4kq2h" podUID="dd7f5f54-06dd-4044-92ff-20df135afdfd" Jan 28 16:04:25 crc kubenswrapper[4981]: I0128 16:04:25.996286 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dk42d/crc-debug-4kq2h" 
event={"ID":"dd7f5f54-06dd-4044-92ff-20df135afdfd","Type":"ContainerStarted","Data":"e1fd90660541e7513e1c325588fbbf2ba7f6a7e98366f2d27054168ad21fafba"} Jan 28 16:04:26 crc kubenswrapper[4981]: I0128 16:04:26.028728 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dk42d/crc-debug-4kq2h" podStartSLOduration=2.385489975 podStartE2EDuration="38.028706577s" podCreationTimestamp="2026-01-28 16:03:48 +0000 UTC" firstStartedPulling="2026-01-28 16:03:49.163483551 +0000 UTC m=+3640.615641792" lastFinishedPulling="2026-01-28 16:04:24.806700153 +0000 UTC m=+3676.258858394" observedRunningTime="2026-01-28 16:04:26.019751288 +0000 UTC m=+3677.471909529" watchObservedRunningTime="2026-01-28 16:04:26.028706577 +0000 UTC m=+3677.480864848" Jan 28 16:05:10 crc kubenswrapper[4981]: I0128 16:05:10.450603 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n7fm8"] Jan 28 16:05:10 crc kubenswrapper[4981]: I0128 16:05:10.454787 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n7fm8" Jan 28 16:05:10 crc kubenswrapper[4981]: I0128 16:05:10.473893 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n7fm8"] Jan 28 16:05:10 crc kubenswrapper[4981]: I0128 16:05:10.542595 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bz26\" (UniqueName: \"kubernetes.io/projected/1ae08581-1e63-411b-a6ab-a5d985571e69-kube-api-access-5bz26\") pod \"certified-operators-n7fm8\" (UID: \"1ae08581-1e63-411b-a6ab-a5d985571e69\") " pod="openshift-marketplace/certified-operators-n7fm8" Jan 28 16:05:10 crc kubenswrapper[4981]: I0128 16:05:10.542679 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ae08581-1e63-411b-a6ab-a5d985571e69-utilities\") pod \"certified-operators-n7fm8\" (UID: \"1ae08581-1e63-411b-a6ab-a5d985571e69\") " pod="openshift-marketplace/certified-operators-n7fm8" Jan 28 16:05:10 crc kubenswrapper[4981]: I0128 16:05:10.542729 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ae08581-1e63-411b-a6ab-a5d985571e69-catalog-content\") pod \"certified-operators-n7fm8\" (UID: \"1ae08581-1e63-411b-a6ab-a5d985571e69\") " pod="openshift-marketplace/certified-operators-n7fm8" Jan 28 16:05:10 crc kubenswrapper[4981]: I0128 16:05:10.644374 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bz26\" (UniqueName: \"kubernetes.io/projected/1ae08581-1e63-411b-a6ab-a5d985571e69-kube-api-access-5bz26\") pod \"certified-operators-n7fm8\" (UID: \"1ae08581-1e63-411b-a6ab-a5d985571e69\") " pod="openshift-marketplace/certified-operators-n7fm8" Jan 28 16:05:10 crc kubenswrapper[4981]: I0128 16:05:10.644548 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ae08581-1e63-411b-a6ab-a5d985571e69-utilities\") pod \"certified-operators-n7fm8\" (UID: \"1ae08581-1e63-411b-a6ab-a5d985571e69\") " pod="openshift-marketplace/certified-operators-n7fm8" Jan 28 16:05:10 crc kubenswrapper[4981]: I0128 16:05:10.644591 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1ae08581-1e63-411b-a6ab-a5d985571e69-catalog-content\") pod \"certified-operators-n7fm8\" (UID: \"1ae08581-1e63-411b-a6ab-a5d985571e69\") " pod="openshift-marketplace/certified-operators-n7fm8" Jan 28 16:05:10 crc kubenswrapper[4981]: I0128 16:05:10.645212 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ae08581-1e63-411b-a6ab-a5d985571e69-utilities\") pod \"certified-operators-n7fm8\" (UID: \"1ae08581-1e63-411b-a6ab-a5d985571e69\") " pod="openshift-marketplace/certified-operators-n7fm8" Jan 28 16:05:10 crc kubenswrapper[4981]: I0128 16:05:10.645325 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ae08581-1e63-411b-a6ab-a5d985571e69-catalog-content\") pod \"certified-operators-n7fm8\" (UID: \"1ae08581-1e63-411b-a6ab-a5d985571e69\") " pod="openshift-marketplace/certified-operators-n7fm8" Jan 28 16:05:10 crc kubenswrapper[4981]: I0128 16:05:10.671128 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bz26\" (UniqueName: \"kubernetes.io/projected/1ae08581-1e63-411b-a6ab-a5d985571e69-kube-api-access-5bz26\") pod \"certified-operators-n7fm8\" (UID: \"1ae08581-1e63-411b-a6ab-a5d985571e69\") " pod="openshift-marketplace/certified-operators-n7fm8" Jan 28 16:05:10 crc kubenswrapper[4981]: I0128 16:05:10.773324 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n7fm8" Jan 28 16:05:11 crc kubenswrapper[4981]: I0128 16:05:11.231795 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n7fm8"] Jan 28 16:05:11 crc kubenswrapper[4981]: I0128 16:05:11.412246 4981 generic.go:334] "Generic (PLEG): container finished" podID="dd7f5f54-06dd-4044-92ff-20df135afdfd" containerID="e1fd90660541e7513e1c325588fbbf2ba7f6a7e98366f2d27054168ad21fafba" exitCode=0 Jan 28 16:05:11 crc kubenswrapper[4981]: I0128 16:05:11.412311 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dk42d/crc-debug-4kq2h" event={"ID":"dd7f5f54-06dd-4044-92ff-20df135afdfd","Type":"ContainerDied","Data":"e1fd90660541e7513e1c325588fbbf2ba7f6a7e98366f2d27054168ad21fafba"} Jan 28 16:05:11 crc kubenswrapper[4981]: I0128 16:05:11.413780 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7fm8" event={"ID":"1ae08581-1e63-411b-a6ab-a5d985571e69","Type":"ContainerStarted","Data":"1a228ecbacd6d071313a3c22f478f9c54e57d22cfb9769684240767539508bd5"} Jan 28 16:05:12 crc kubenswrapper[4981]: I0128 16:05:12.426661 4981 generic.go:334] "Generic (PLEG): container finished" podID="1ae08581-1e63-411b-a6ab-a5d985571e69" containerID="fbdd4fb84cf560a6a4630c4a7140e5b22b7489cb88f31f7d61988a50116cf096" exitCode=0 Jan 28 16:05:12 crc kubenswrapper[4981]: I0128 16:05:12.429459 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7fm8" event={"ID":"1ae08581-1e63-411b-a6ab-a5d985571e69","Type":"ContainerDied","Data":"fbdd4fb84cf560a6a4630c4a7140e5b22b7489cb88f31f7d61988a50116cf096"} Jan 28 16:05:12 crc kubenswrapper[4981]: I0128 16:05:12.533021 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dk42d/crc-debug-4kq2h" Jan 28 16:05:12 crc kubenswrapper[4981]: I0128 16:05:12.577725 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dk42d/crc-debug-4kq2h"] Jan 28 16:05:12 crc kubenswrapper[4981]: I0128 16:05:12.587902 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dk42d/crc-debug-4kq2h"] Jan 28 16:05:12 crc kubenswrapper[4981]: I0128 16:05:12.591761 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd7f5f54-06dd-4044-92ff-20df135afdfd-host\") pod \"dd7f5f54-06dd-4044-92ff-20df135afdfd\" (UID: \"dd7f5f54-06dd-4044-92ff-20df135afdfd\") " Jan 28 16:05:12 crc kubenswrapper[4981]: I0128 16:05:12.591938 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhhb4\" (UniqueName: \"kubernetes.io/projected/dd7f5f54-06dd-4044-92ff-20df135afdfd-kube-api-access-lhhb4\") pod \"dd7f5f54-06dd-4044-92ff-20df135afdfd\" (UID: \"dd7f5f54-06dd-4044-92ff-20df135afdfd\") " Jan 28 16:05:12 crc kubenswrapper[4981]: I0128 16:05:12.592060 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd7f5f54-06dd-4044-92ff-20df135afdfd-host" (OuterVolumeSpecName: "host") pod "dd7f5f54-06dd-4044-92ff-20df135afdfd" (UID: "dd7f5f54-06dd-4044-92ff-20df135afdfd"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:05:12 crc kubenswrapper[4981]: I0128 16:05:12.592474 4981 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/dd7f5f54-06dd-4044-92ff-20df135afdfd-host\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:12 crc kubenswrapper[4981]: I0128 16:05:12.597724 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd7f5f54-06dd-4044-92ff-20df135afdfd-kube-api-access-lhhb4" (OuterVolumeSpecName: "kube-api-access-lhhb4") pod "dd7f5f54-06dd-4044-92ff-20df135afdfd" (UID: "dd7f5f54-06dd-4044-92ff-20df135afdfd"). InnerVolumeSpecName "kube-api-access-lhhb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:12 crc kubenswrapper[4981]: I0128 16:05:12.694062 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhhb4\" (UniqueName: \"kubernetes.io/projected/dd7f5f54-06dd-4044-92ff-20df135afdfd-kube-api-access-lhhb4\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:13 crc kubenswrapper[4981]: I0128 16:05:13.331888 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd7f5f54-06dd-4044-92ff-20df135afdfd" path="/var/lib/kubelet/pods/dd7f5f54-06dd-4044-92ff-20df135afdfd/volumes" Jan 28 16:05:13 crc kubenswrapper[4981]: I0128 16:05:13.440999 4981 scope.go:117] "RemoveContainer" containerID="e1fd90660541e7513e1c325588fbbf2ba7f6a7e98366f2d27054168ad21fafba" Jan 28 16:05:13 crc kubenswrapper[4981]: I0128 16:05:13.441042 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dk42d/crc-debug-4kq2h" Jan 28 16:05:13 crc kubenswrapper[4981]: I0128 16:05:13.764587 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dk42d/crc-debug-k74ql"] Jan 28 16:05:13 crc kubenswrapper[4981]: E0128 16:05:13.765240 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd7f5f54-06dd-4044-92ff-20df135afdfd" containerName="container-00" Jan 28 16:05:13 crc kubenswrapper[4981]: I0128 16:05:13.765259 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd7f5f54-06dd-4044-92ff-20df135afdfd" containerName="container-00" Jan 28 16:05:13 crc kubenswrapper[4981]: I0128 16:05:13.765453 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd7f5f54-06dd-4044-92ff-20df135afdfd" containerName="container-00" Jan 28 16:05:13 crc kubenswrapper[4981]: I0128 16:05:13.766034 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dk42d/crc-debug-k74ql" Jan 28 16:05:13 crc kubenswrapper[4981]: I0128 16:05:13.813879 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvmkw\" (UniqueName: \"kubernetes.io/projected/aae983b5-08dc-4d58-adb6-519aa00b1dda-kube-api-access-jvmkw\") pod \"crc-debug-k74ql\" (UID: \"aae983b5-08dc-4d58-adb6-519aa00b1dda\") " pod="openshift-must-gather-dk42d/crc-debug-k74ql" Jan 28 16:05:13 crc kubenswrapper[4981]: I0128 16:05:13.814052 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aae983b5-08dc-4d58-adb6-519aa00b1dda-host\") pod \"crc-debug-k74ql\" (UID: \"aae983b5-08dc-4d58-adb6-519aa00b1dda\") " pod="openshift-must-gather-dk42d/crc-debug-k74ql" Jan 28 16:05:13 crc kubenswrapper[4981]: I0128 16:05:13.915883 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvmkw\" (UniqueName: \"kubernetes.io/projected/aae983b5-08dc-4d58-adb6-519aa00b1dda-kube-api-access-jvmkw\") pod \"crc-debug-k74ql\" (UID: \"aae983b5-08dc-4d58-adb6-519aa00b1dda\") " pod="openshift-must-gather-dk42d/crc-debug-k74ql" Jan 28 16:05:13 crc kubenswrapper[4981]: I0128 16:05:13.915988 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aae983b5-08dc-4d58-adb6-519aa00b1dda-host\") pod \"crc-debug-k74ql\" (UID: \"aae983b5-08dc-4d58-adb6-519aa00b1dda\") " pod="openshift-must-gather-dk42d/crc-debug-k74ql" Jan 28 16:05:13 crc kubenswrapper[4981]: I0128 16:05:13.916129 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aae983b5-08dc-4d58-adb6-519aa00b1dda-host\") pod \"crc-debug-k74ql\" (UID: \"aae983b5-08dc-4d58-adb6-519aa00b1dda\") " pod="openshift-must-gather-dk42d/crc-debug-k74ql" Jan 28 16:05:13 crc kubenswrapper[4981]: I0128 16:05:13.939707 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvmkw\" (UniqueName: \"kubernetes.io/projected/aae983b5-08dc-4d58-adb6-519aa00b1dda-kube-api-access-jvmkw\") pod \"crc-debug-k74ql\" (UID: \"aae983b5-08dc-4d58-adb6-519aa00b1dda\") " pod="openshift-must-gather-dk42d/crc-debug-k74ql" Jan 28 16:05:14 crc kubenswrapper[4981]: I0128 16:05:14.082991 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dk42d/crc-debug-k74ql" Jan 28 16:05:14 crc kubenswrapper[4981]: I0128 16:05:14.450716 4981 generic.go:334] "Generic (PLEG): container finished" podID="aae983b5-08dc-4d58-adb6-519aa00b1dda" containerID="3fed6a3d75941c12f388a1638633146b10fe3d86e6d06e4c2a1e9111cbdc102d" exitCode=0 Jan 28 16:05:14 crc kubenswrapper[4981]: I0128 16:05:14.450767 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dk42d/crc-debug-k74ql" event={"ID":"aae983b5-08dc-4d58-adb6-519aa00b1dda","Type":"ContainerDied","Data":"3fed6a3d75941c12f388a1638633146b10fe3d86e6d06e4c2a1e9111cbdc102d"} Jan 28 16:05:14 crc kubenswrapper[4981]: I0128 16:05:14.450851 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dk42d/crc-debug-k74ql" event={"ID":"aae983b5-08dc-4d58-adb6-519aa00b1dda","Type":"ContainerStarted","Data":"bf4a4c4c447091c6282d21132e89b701c4490a4ee43b799a94be903417c61226"} Jan 28 16:05:14 crc kubenswrapper[4981]: I0128 16:05:14.456062 4981 generic.go:334] "Generic (PLEG): container finished" podID="1ae08581-1e63-411b-a6ab-a5d985571e69" containerID="c4f15eeb5e51d0ae57e3c9820acdc73273e8c30da63ebdd7143e56ffb4eaf3b2" exitCode=0 Jan 28 16:05:14 crc kubenswrapper[4981]: I0128 16:05:14.456104 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7fm8" event={"ID":"1ae08581-1e63-411b-a6ab-a5d985571e69","Type":"ContainerDied","Data":"c4f15eeb5e51d0ae57e3c9820acdc73273e8c30da63ebdd7143e56ffb4eaf3b2"} Jan 28 16:05:15 crc kubenswrapper[4981]: I0128 16:05:15.022322 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dk42d/crc-debug-k74ql"] Jan 28 16:05:15 crc kubenswrapper[4981]: I0128 16:05:15.029594 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dk42d/crc-debug-k74ql"] Jan 28 16:05:15 crc kubenswrapper[4981]: I0128 16:05:15.468352 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7fm8" event={"ID":"1ae08581-1e63-411b-a6ab-a5d985571e69","Type":"ContainerStarted","Data":"a491e79ad61dfda57ac295b92e37f659a53b072bd4f4f7e31eb1453a36dc3d1b"} Jan 28 16:05:15 crc kubenswrapper[4981]: I0128 16:05:15.505844 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n7fm8" podStartSLOduration=3.013911992 podStartE2EDuration="5.505817095s" podCreationTimestamp="2026-01-28 16:05:10 +0000 UTC" firstStartedPulling="2026-01-28 16:05:12.429761161 +0000 UTC m=+3723.881919412" lastFinishedPulling="2026-01-28 16:05:14.921666274 +0000 UTC m=+3726.373824515" observedRunningTime="2026-01-28 16:05:15.504717136 +0000 UTC m=+3726.956875377" watchObservedRunningTime="2026-01-28 16:05:15.505817095 +0000 UTC m=+3726.957975336" Jan 28 16:05:15 crc kubenswrapper[4981]: I0128 16:05:15.579077 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dk42d/crc-debug-k74ql" Jan 28 16:05:15 crc kubenswrapper[4981]: I0128 16:05:15.651552 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aae983b5-08dc-4d58-adb6-519aa00b1dda-host\") pod \"aae983b5-08dc-4d58-adb6-519aa00b1dda\" (UID: \"aae983b5-08dc-4d58-adb6-519aa00b1dda\") " Jan 28 16:05:15 crc kubenswrapper[4981]: I0128 16:05:15.651683 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvmkw\" (UniqueName: \"kubernetes.io/projected/aae983b5-08dc-4d58-adb6-519aa00b1dda-kube-api-access-jvmkw\") pod \"aae983b5-08dc-4d58-adb6-519aa00b1dda\" (UID: \"aae983b5-08dc-4d58-adb6-519aa00b1dda\") " Jan 28 16:05:15 crc kubenswrapper[4981]: I0128 16:05:15.651810 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae983b5-08dc-4d58-adb6-519aa00b1dda-host" (OuterVolumeSpecName: "host") pod "aae983b5-08dc-4d58-adb6-519aa00b1dda" (UID: "aae983b5-08dc-4d58-adb6-519aa00b1dda"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:05:15 crc kubenswrapper[4981]: I0128 16:05:15.652236 4981 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/aae983b5-08dc-4d58-adb6-519aa00b1dda-host\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:15 crc kubenswrapper[4981]: I0128 16:05:15.657949 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae983b5-08dc-4d58-adb6-519aa00b1dda-kube-api-access-jvmkw" (OuterVolumeSpecName: "kube-api-access-jvmkw") pod "aae983b5-08dc-4d58-adb6-519aa00b1dda" (UID: "aae983b5-08dc-4d58-adb6-519aa00b1dda"). InnerVolumeSpecName "kube-api-access-jvmkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:15 crc kubenswrapper[4981]: I0128 16:05:15.753649 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvmkw\" (UniqueName: \"kubernetes.io/projected/aae983b5-08dc-4d58-adb6-519aa00b1dda-kube-api-access-jvmkw\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:16 crc kubenswrapper[4981]: I0128 16:05:16.209274 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dk42d/crc-debug-4wlxg"] Jan 28 16:05:16 crc kubenswrapper[4981]: E0128 16:05:16.209902 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae983b5-08dc-4d58-adb6-519aa00b1dda" containerName="container-00" Jan 28 16:05:16 crc kubenswrapper[4981]: I0128 16:05:16.209961 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae983b5-08dc-4d58-adb6-519aa00b1dda" containerName="container-00" Jan 28 16:05:16 crc kubenswrapper[4981]: I0128 16:05:16.210371 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae983b5-08dc-4d58-adb6-519aa00b1dda" containerName="container-00" Jan 28 16:05:16 crc kubenswrapper[4981]: I0128 16:05:16.211564 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dk42d/crc-debug-4wlxg" Jan 28 16:05:16 crc kubenswrapper[4981]: I0128 16:05:16.262984 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnxv8\" (UniqueName: \"kubernetes.io/projected/d653964f-ec30-4ceb-8d9a-1af11d750b61-kube-api-access-lnxv8\") pod \"crc-debug-4wlxg\" (UID: \"d653964f-ec30-4ceb-8d9a-1af11d750b61\") " pod="openshift-must-gather-dk42d/crc-debug-4wlxg" Jan 28 16:05:16 crc kubenswrapper[4981]: I0128 16:05:16.263038 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d653964f-ec30-4ceb-8d9a-1af11d750b61-host\") pod \"crc-debug-4wlxg\" (UID: \"d653964f-ec30-4ceb-8d9a-1af11d750b61\") " pod="openshift-must-gather-dk42d/crc-debug-4wlxg" Jan 28 16:05:16 crc kubenswrapper[4981]: I0128 16:05:16.365146 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnxv8\" (UniqueName: \"kubernetes.io/projected/d653964f-ec30-4ceb-8d9a-1af11d750b61-kube-api-access-lnxv8\") pod \"crc-debug-4wlxg\" (UID: \"d653964f-ec30-4ceb-8d9a-1af11d750b61\") " pod="openshift-must-gather-dk42d/crc-debug-4wlxg" Jan 28 16:05:16 crc kubenswrapper[4981]: I0128 16:05:16.365404 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d653964f-ec30-4ceb-8d9a-1af11d750b61-host\") pod \"crc-debug-4wlxg\" (UID: \"d653964f-ec30-4ceb-8d9a-1af11d750b61\") " pod="openshift-must-gather-dk42d/crc-debug-4wlxg" Jan 28 16:05:16 crc kubenswrapper[4981]: I0128 16:05:16.365589 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d653964f-ec30-4ceb-8d9a-1af11d750b61-host\") pod \"crc-debug-4wlxg\" (UID: \"d653964f-ec30-4ceb-8d9a-1af11d750b61\") " pod="openshift-must-gather-dk42d/crc-debug-4wlxg" Jan 28 16:05:16 crc kubenswrapper[4981]: I0128 16:05:16.389984 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnxv8\" (UniqueName: \"kubernetes.io/projected/d653964f-ec30-4ceb-8d9a-1af11d750b61-kube-api-access-lnxv8\") pod \"crc-debug-4wlxg\" (UID: \"d653964f-ec30-4ceb-8d9a-1af11d750b61\") " pod="openshift-must-gather-dk42d/crc-debug-4wlxg" Jan 28 16:05:16 crc kubenswrapper[4981]: I0128 16:05:16.478151 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dk42d/crc-debug-k74ql" Jan 28 16:05:16 crc kubenswrapper[4981]: I0128 16:05:16.478164 4981 scope.go:117] "RemoveContainer" containerID="3fed6a3d75941c12f388a1638633146b10fe3d86e6d06e4c2a1e9111cbdc102d" Jan 28 16:05:16 crc kubenswrapper[4981]: I0128 16:05:16.531843 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dk42d/crc-debug-4wlxg" Jan 28 16:05:16 crc kubenswrapper[4981]: W0128 16:05:16.567873 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd653964f_ec30_4ceb_8d9a_1af11d750b61.slice/crio-334159b49aa885f8c7722890a75166cb556198a00e5ded97578d3925a7b24214 WatchSource:0}: Error finding container 334159b49aa885f8c7722890a75166cb556198a00e5ded97578d3925a7b24214: Status 404 returned error can't find the container with id 334159b49aa885f8c7722890a75166cb556198a00e5ded97578d3925a7b24214 Jan 28 16:05:17 crc kubenswrapper[4981]: I0128 16:05:17.345794 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aae983b5-08dc-4d58-adb6-519aa00b1dda" path="/var/lib/kubelet/pods/aae983b5-08dc-4d58-adb6-519aa00b1dda/volumes" Jan 28 16:05:17 crc kubenswrapper[4981]: I0128 16:05:17.491354 4981 generic.go:334] "Generic (PLEG): container finished" podID="d653964f-ec30-4ceb-8d9a-1af11d750b61" containerID="baf95c4183084cbe455615d42b89ba420db643bc4a0142863bf79a6015598e87" exitCode=0 Jan 28 16:05:17 crc kubenswrapper[4981]: I0128 16:05:17.491389 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dk42d/crc-debug-4wlxg" event={"ID":"d653964f-ec30-4ceb-8d9a-1af11d750b61","Type":"ContainerDied","Data":"baf95c4183084cbe455615d42b89ba420db643bc4a0142863bf79a6015598e87"} Jan 28 16:05:17 crc kubenswrapper[4981]: I0128 16:05:17.491440 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dk42d/crc-debug-4wlxg" event={"ID":"d653964f-ec30-4ceb-8d9a-1af11d750b61","Type":"ContainerStarted","Data":"334159b49aa885f8c7722890a75166cb556198a00e5ded97578d3925a7b24214"} Jan 28 16:05:17 crc kubenswrapper[4981]: I0128 16:05:17.538659 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dk42d/crc-debug-4wlxg"] Jan 28 16:05:17 crc kubenswrapper[4981]: I0128 16:05:17.556390 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dk42d/crc-debug-4wlxg"] Jan 28 16:05:18 crc kubenswrapper[4981]: I0128 16:05:18.604156 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dk42d/crc-debug-4wlxg" Jan 28 16:05:18 crc kubenswrapper[4981]: I0128 16:05:18.710605 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d653964f-ec30-4ceb-8d9a-1af11d750b61-host\") pod \"d653964f-ec30-4ceb-8d9a-1af11d750b61\" (UID: \"d653964f-ec30-4ceb-8d9a-1af11d750b61\") " Jan 28 16:05:18 crc kubenswrapper[4981]: I0128 16:05:18.710697 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnxv8\" (UniqueName: \"kubernetes.io/projected/d653964f-ec30-4ceb-8d9a-1af11d750b61-kube-api-access-lnxv8\") pod \"d653964f-ec30-4ceb-8d9a-1af11d750b61\" (UID: \"d653964f-ec30-4ceb-8d9a-1af11d750b61\") " Jan 28 16:05:18 crc kubenswrapper[4981]: I0128 16:05:18.710758 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d653964f-ec30-4ceb-8d9a-1af11d750b61-host" (OuterVolumeSpecName: "host") pod "d653964f-ec30-4ceb-8d9a-1af11d750b61" (UID: "d653964f-ec30-4ceb-8d9a-1af11d750b61"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:05:18 crc kubenswrapper[4981]: I0128 16:05:18.711354 4981 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d653964f-ec30-4ceb-8d9a-1af11d750b61-host\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:18 crc kubenswrapper[4981]: I0128 16:05:18.727381 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d653964f-ec30-4ceb-8d9a-1af11d750b61-kube-api-access-lnxv8" (OuterVolumeSpecName: "kube-api-access-lnxv8") pod "d653964f-ec30-4ceb-8d9a-1af11d750b61" (UID: "d653964f-ec30-4ceb-8d9a-1af11d750b61"). InnerVolumeSpecName "kube-api-access-lnxv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:18 crc kubenswrapper[4981]: I0128 16:05:18.813238 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnxv8\" (UniqueName: \"kubernetes.io/projected/d653964f-ec30-4ceb-8d9a-1af11d750b61-kube-api-access-lnxv8\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:19 crc kubenswrapper[4981]: I0128 16:05:19.329278 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d653964f-ec30-4ceb-8d9a-1af11d750b61" path="/var/lib/kubelet/pods/d653964f-ec30-4ceb-8d9a-1af11d750b61/volumes" Jan 28 16:05:19 crc kubenswrapper[4981]: I0128 16:05:19.507520 4981 scope.go:117] "RemoveContainer" containerID="baf95c4183084cbe455615d42b89ba420db643bc4a0142863bf79a6015598e87" Jan 28 16:05:19 crc kubenswrapper[4981]: I0128 16:05:19.507652 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dk42d/crc-debug-4wlxg" Jan 28 16:05:20 crc kubenswrapper[4981]: I0128 16:05:20.773526 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n7fm8" Jan 28 16:05:20 crc kubenswrapper[4981]: I0128 16:05:20.773821 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n7fm8" Jan 28 16:05:20 crc kubenswrapper[4981]: I0128 16:05:20.828622 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n7fm8" Jan 28 16:05:21 crc kubenswrapper[4981]: I0128 16:05:21.577665 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n7fm8" Jan 28 16:05:21 crc kubenswrapper[4981]: I0128 16:05:21.634272 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n7fm8"] Jan 28 16:05:23 crc kubenswrapper[4981]: I0128 16:05:23.556136 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n7fm8" podUID="1ae08581-1e63-411b-a6ab-a5d985571e69" containerName="registry-server" containerID="cri-o://a491e79ad61dfda57ac295b92e37f659a53b072bd4f4f7e31eb1453a36dc3d1b" gracePeriod=2 Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.046489 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n7fm8" Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.127334 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bz26\" (UniqueName: \"kubernetes.io/projected/1ae08581-1e63-411b-a6ab-a5d985571e69-kube-api-access-5bz26\") pod \"1ae08581-1e63-411b-a6ab-a5d985571e69\" (UID: \"1ae08581-1e63-411b-a6ab-a5d985571e69\") " Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.127454 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ae08581-1e63-411b-a6ab-a5d985571e69-utilities\") pod \"1ae08581-1e63-411b-a6ab-a5d985571e69\" (UID: \"1ae08581-1e63-411b-a6ab-a5d985571e69\") " Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.127500 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ae08581-1e63-411b-a6ab-a5d985571e69-catalog-content\") pod \"1ae08581-1e63-411b-a6ab-a5d985571e69\" (UID: \"1ae08581-1e63-411b-a6ab-a5d985571e69\") " Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.128550 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ae08581-1e63-411b-a6ab-a5d985571e69-utilities" (OuterVolumeSpecName: "utilities") pod "1ae08581-1e63-411b-a6ab-a5d985571e69" (UID: "1ae08581-1e63-411b-a6ab-a5d985571e69"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.134232 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ae08581-1e63-411b-a6ab-a5d985571e69-kube-api-access-5bz26" (OuterVolumeSpecName: "kube-api-access-5bz26") pod "1ae08581-1e63-411b-a6ab-a5d985571e69" (UID: "1ae08581-1e63-411b-a6ab-a5d985571e69"). InnerVolumeSpecName "kube-api-access-5bz26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.180413 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ae08581-1e63-411b-a6ab-a5d985571e69-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ae08581-1e63-411b-a6ab-a5d985571e69" (UID: "1ae08581-1e63-411b-a6ab-a5d985571e69"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.230508 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bz26\" (UniqueName: \"kubernetes.io/projected/1ae08581-1e63-411b-a6ab-a5d985571e69-kube-api-access-5bz26\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.230757 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ae08581-1e63-411b-a6ab-a5d985571e69-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.230819 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ae08581-1e63-411b-a6ab-a5d985571e69-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.566391 4981 generic.go:334] "Generic (PLEG): container finished" podID="1ae08581-1e63-411b-a6ab-a5d985571e69" containerID="a491e79ad61dfda57ac295b92e37f659a53b072bd4f4f7e31eb1453a36dc3d1b" exitCode=0 Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.566433 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n7fm8" Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.566437 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7fm8" event={"ID":"1ae08581-1e63-411b-a6ab-a5d985571e69","Type":"ContainerDied","Data":"a491e79ad61dfda57ac295b92e37f659a53b072bd4f4f7e31eb1453a36dc3d1b"} Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.566543 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n7fm8" event={"ID":"1ae08581-1e63-411b-a6ab-a5d985571e69","Type":"ContainerDied","Data":"1a228ecbacd6d071313a3c22f478f9c54e57d22cfb9769684240767539508bd5"} Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.566566 4981 scope.go:117] "RemoveContainer" containerID="a491e79ad61dfda57ac295b92e37f659a53b072bd4f4f7e31eb1453a36dc3d1b" Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.593546 4981 scope.go:117] "RemoveContainer" containerID="c4f15eeb5e51d0ae57e3c9820acdc73273e8c30da63ebdd7143e56ffb4eaf3b2" Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.602336 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n7fm8"] Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.615084 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n7fm8"] Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.635002 4981 scope.go:117] "RemoveContainer" containerID="fbdd4fb84cf560a6a4630c4a7140e5b22b7489cb88f31f7d61988a50116cf096" Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.677645 4981 scope.go:117] "RemoveContainer" containerID="a491e79ad61dfda57ac295b92e37f659a53b072bd4f4f7e31eb1453a36dc3d1b" Jan 28 16:05:24 crc kubenswrapper[4981]: E0128 16:05:24.678609 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a491e79ad61dfda57ac295b92e37f659a53b072bd4f4f7e31eb1453a36dc3d1b\": container with ID starting with a491e79ad61dfda57ac295b92e37f659a53b072bd4f4f7e31eb1453a36dc3d1b not found: ID does not exist" containerID="a491e79ad61dfda57ac295b92e37f659a53b072bd4f4f7e31eb1453a36dc3d1b" Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.678658 
4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a491e79ad61dfda57ac295b92e37f659a53b072bd4f4f7e31eb1453a36dc3d1b"} err="failed to get container status \"a491e79ad61dfda57ac295b92e37f659a53b072bd4f4f7e31eb1453a36dc3d1b\": rpc error: code = NotFound desc = could not find container \"a491e79ad61dfda57ac295b92e37f659a53b072bd4f4f7e31eb1453a36dc3d1b\": container with ID starting with a491e79ad61dfda57ac295b92e37f659a53b072bd4f4f7e31eb1453a36dc3d1b not found: ID does not exist" Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.678684 4981 scope.go:117] "RemoveContainer" containerID="c4f15eeb5e51d0ae57e3c9820acdc73273e8c30da63ebdd7143e56ffb4eaf3b2" Jan 28 16:05:24 crc kubenswrapper[4981]: E0128 16:05:24.685322 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4f15eeb5e51d0ae57e3c9820acdc73273e8c30da63ebdd7143e56ffb4eaf3b2\": container with ID starting with c4f15eeb5e51d0ae57e3c9820acdc73273e8c30da63ebdd7143e56ffb4eaf3b2 not found: ID does not exist" containerID="c4f15eeb5e51d0ae57e3c9820acdc73273e8c30da63ebdd7143e56ffb4eaf3b2" Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.685366 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4f15eeb5e51d0ae57e3c9820acdc73273e8c30da63ebdd7143e56ffb4eaf3b2"} err="failed to get container status \"c4f15eeb5e51d0ae57e3c9820acdc73273e8c30da63ebdd7143e56ffb4eaf3b2\": rpc error: code = NotFound desc = could not find container \"c4f15eeb5e51d0ae57e3c9820acdc73273e8c30da63ebdd7143e56ffb4eaf3b2\": container with ID starting with c4f15eeb5e51d0ae57e3c9820acdc73273e8c30da63ebdd7143e56ffb4eaf3b2 not found: ID does not exist" Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.685392 4981 scope.go:117] "RemoveContainer" containerID="fbdd4fb84cf560a6a4630c4a7140e5b22b7489cb88f31f7d61988a50116cf096" Jan 28 16:05:24 crc kubenswrapper[4981]: E0128 16:05:24.686021 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbdd4fb84cf560a6a4630c4a7140e5b22b7489cb88f31f7d61988a50116cf096\": container with ID starting with fbdd4fb84cf560a6a4630c4a7140e5b22b7489cb88f31f7d61988a50116cf096 not found: ID does not exist" containerID="fbdd4fb84cf560a6a4630c4a7140e5b22b7489cb88f31f7d61988a50116cf096" Jan 28 16:05:24 crc kubenswrapper[4981]: I0128 16:05:24.686061 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbdd4fb84cf560a6a4630c4a7140e5b22b7489cb88f31f7d61988a50116cf096"} err="failed to get container status \"fbdd4fb84cf560a6a4630c4a7140e5b22b7489cb88f31f7d61988a50116cf096\": rpc error: code = NotFound desc = could not find container \"fbdd4fb84cf560a6a4630c4a7140e5b22b7489cb88f31f7d61988a50116cf096\": container with ID starting with fbdd4fb84cf560a6a4630c4a7140e5b22b7489cb88f31f7d61988a50116cf096 not found: ID does not exist" Jan 28 16:05:25 crc kubenswrapper[4981]: I0128 16:05:25.328263 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ae08581-1e63-411b-a6ab-a5d985571e69" path="/var/lib/kubelet/pods/1ae08581-1e63-411b-a6ab-a5d985571e69/volumes" Jan 28 16:05:34 crc kubenswrapper[4981]: I0128 16:05:34.042850 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-f6d8cf6db-mshgs_46c7dad2-c36e-4c3e-80f0-c6e3ec088723/barbican-api/0.log" Jan 28 16:05:34 crc kubenswrapper[4981]: I0128 16:05:34.219782 4981 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-f6d8cf6db-mshgs_46c7dad2-c36e-4c3e-80f0-c6e3ec088723/barbican-api-log/0.log" Jan 28 16:05:34 crc kubenswrapper[4981]: I0128 16:05:34.294301 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7bd55659d6-qkw24_9001e7fd-73ee-4169-a239-fa6452ac69d2/barbican-keystone-listener/0.log" Jan 28 16:05:34 crc kubenswrapper[4981]: I0128 16:05:34.357831 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7bd55659d6-qkw24_9001e7fd-73ee-4169-a239-fa6452ac69d2/barbican-keystone-listener-log/0.log" Jan 28 16:05:34 crc kubenswrapper[4981]: I0128 16:05:34.490262 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-695f5b56f5-7h6s9_2e2d1563-14e3-41bc-8830-51e28da77c5e/barbican-worker/0.log" Jan 28 16:05:34 crc kubenswrapper[4981]: I0128 16:05:34.508650 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-695f5b56f5-7h6s9_2e2d1563-14e3-41bc-8830-51e28da77c5e/barbican-worker-log/0.log" Jan 28 16:05:34 crc kubenswrapper[4981]: I0128 16:05:34.651539 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x_c8005a06-6ceb-4918-867e-216081419a3a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:05:34 crc kubenswrapper[4981]: I0128 16:05:34.737722 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bfb88da5-80c7-481b-89ba-2c5c08c258c0/ceilometer-central-agent/0.log" Jan 28 16:05:34 crc kubenswrapper[4981]: I0128 16:05:34.857350 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bfb88da5-80c7-481b-89ba-2c5c08c258c0/ceilometer-notification-agent/0.log" Jan 28 16:05:34 crc kubenswrapper[4981]: I0128 16:05:34.868241 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bfb88da5-80c7-481b-89ba-2c5c08c258c0/proxy-httpd/0.log" Jan 28 16:05:34 crc kubenswrapper[4981]: I0128 16:05:34.922283 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bfb88da5-80c7-481b-89ba-2c5c08c258c0/sg-core/0.log" Jan 28 16:05:35 crc kubenswrapper[4981]: I0128 16:05:35.072603 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_934e0f8e-1579-4d0e-a34a-53d266c4612a/cinder-api-log/0.log" Jan 28 16:05:35 crc kubenswrapper[4981]: I0128 16:05:35.124944 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_934e0f8e-1579-4d0e-a34a-53d266c4612a/cinder-api/0.log" Jan 28 16:05:35 crc kubenswrapper[4981]: I0128 16:05:35.239410 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8617985e-2166-4325-82d1-6004e7eff07d/cinder-scheduler/0.log" Jan 28 16:05:35 crc kubenswrapper[4981]: I0128 16:05:35.346851 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8617985e-2166-4325-82d1-6004e7eff07d/probe/0.log" Jan 28 16:05:35 crc kubenswrapper[4981]: I0128 16:05:35.401889 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p_f50a5359-8f8b-47bc-a345-c91ace0f611f/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:05:35 crc kubenswrapper[4981]: I0128 16:05:35.612712 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6_c7607b3a-6cc7-4240-acd3-866b7d39e6be/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:05:35 crc kubenswrapper[4981]: I0128 16:05:35.636437 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-t7szx_83f911a5-2f1f-4cc2-a2cb-74c94632dd94/init/0.log" Jan 28 16:05:35 crc kubenswrapper[4981]: I0128 16:05:35.872218 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-t7szx_83f911a5-2f1f-4cc2-a2cb-74c94632dd94/init/0.log" Jan 28 16:05:35 crc kubenswrapper[4981]: I0128 16:05:35.893159 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-t7szx_83f911a5-2f1f-4cc2-a2cb-74c94632dd94/dnsmasq-dns/0.log" Jan 28 16:05:35 crc kubenswrapper[4981]: I0128 16:05:35.949982 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7_f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:05:36 crc kubenswrapper[4981]: I0128 16:05:36.140456 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_7888281b-1740-4d52-9752-cac22c11d44e/glance-log/0.log" Jan 28 16:05:36 crc kubenswrapper[4981]: I0128 16:05:36.173127 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_7888281b-1740-4d52-9752-cac22c11d44e/glance-httpd/0.log" Jan 28 16:05:36 crc kubenswrapper[4981]: I0128 16:05:36.354929 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ce43940a-33fa-4da9-a910-a57dc6230e57/glance-httpd/0.log" Jan 28 16:05:36 crc kubenswrapper[4981]: I0128 16:05:36.368017 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ce43940a-33fa-4da9-a910-a57dc6230e57/glance-log/0.log" Jan 28 16:05:36 crc kubenswrapper[4981]: I0128 16:05:36.517461 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6d9d89fcfb-mwsgh_d02db79a-7f4f-453c-8e92-2e8291f442f1/horizon/0.log" Jan 28 16:05:36 crc kubenswrapper[4981]: I0128 16:05:36.670557 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z_d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:05:36 crc kubenswrapper[4981]: I0128 16:05:36.885854 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-7q8hj_7dd2589f-e346-4ce7-a193-1e8eac0a2318/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:05:36 crc kubenswrapper[4981]: I0128 16:05:36.926554 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6d9d89fcfb-mwsgh_d02db79a-7f4f-453c-8e92-2e8291f442f1/horizon-log/0.log" Jan 28 16:05:37 crc kubenswrapper[4981]: I0128 16:05:37.152340 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29493601-rsfpp_46fad62e-3ca9-4842-a2c7-0e0fd654d37f/keystone-cron/0.log" Jan 28 16:05:37 crc kubenswrapper[4981]: I0128 16:05:37.166310 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-569ff6748d-zhgp9_8f290ab1-489a-4b7e-9815-a6bd2a528f5e/keystone-api/0.log" Jan 28 16:05:37 crc kubenswrapper[4981]: I0128 16:05:37.293064 4981 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_kube-state-metrics-0_ead8d0cb-bf17-4ff6-b6ed-65c7205194cc/kube-state-metrics/0.log" Jan 28 16:05:37 crc kubenswrapper[4981]: I0128 16:05:37.444334 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p_e78a3044-c335-4c2f-9fa6-314f2d40ef11/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:05:37 crc kubenswrapper[4981]: I0128 16:05:37.854430 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-787f455647-ngpww_b73f2a77-d6ea-418e-93a0-9d5a928637eb/neutron-api/0.log" Jan 28 16:05:37 crc kubenswrapper[4981]: I0128 16:05:37.916547 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-787f455647-ngpww_b73f2a77-d6ea-418e-93a0-9d5a928637eb/neutron-httpd/0.log" Jan 28 16:05:38 crc kubenswrapper[4981]: I0128 16:05:38.152059 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp_fa2e6c63-891a-4395-8270-942b5d5f168f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:05:38 crc kubenswrapper[4981]: I0128 16:05:38.664014 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f11d89fe-23cc-4fe1-b03e-c3c5e3613280/nova-cell0-conductor-conductor/0.log" Jan 28 16:05:38 crc kubenswrapper[4981]: I0128 16:05:38.750273 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c04c7269-f8ca-43e7-a204-d1ab4429f2f5/nova-api-log/0.log" Jan 28 16:05:38 crc kubenswrapper[4981]: I0128 16:05:38.920659 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c04c7269-f8ca-43e7-a204-d1ab4429f2f5/nova-api-api/0.log" Jan 28 16:05:39 crc kubenswrapper[4981]: I0128 16:05:39.001256 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f6994064-73bf-495e-928b-5ef46487e938/nova-cell1-conductor-conductor/0.log" Jan 28 16:05:39 crc kubenswrapper[4981]: I0128 16:05:39.093467 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_a7a16e56-277e-47a2-91e2-21a8ec2976db/nova-cell1-novncproxy-novncproxy/0.log" Jan 28 16:05:39 crc kubenswrapper[4981]: I0128 16:05:39.250804 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-gx6fp_d6e35d22-36ba-4506-a8bf-f0a7f539502a/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:05:39 crc kubenswrapper[4981]: I0128 16:05:39.393303 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c/nova-metadata-log/0.log" Jan 28 16:05:39 crc kubenswrapper[4981]: I0128 16:05:39.642030 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_8baced8e-6e29-4788-a841-d5c7d8a5e294/nova-scheduler-scheduler/0.log" Jan 28 16:05:39 crc kubenswrapper[4981]: I0128 16:05:39.770834 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd/mysql-bootstrap/0.log" Jan 28 16:05:39 crc kubenswrapper[4981]: I0128 16:05:39.964556 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd/galera/0.log" Jan 28 16:05:39 crc kubenswrapper[4981]: I0128 16:05:39.982803 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd/mysql-bootstrap/0.log" Jan 28 16:05:40 crc kubenswrapper[4981]: I0128 16:05:40.147301 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ee506ff0-7634-45eb-ac9f-5d5de1b3c40a/mysql-bootstrap/0.log" Jan 28 16:05:40 crc kubenswrapper[4981]: I0128 16:05:40.349261 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ee506ff0-7634-45eb-ac9f-5d5de1b3c40a/mysql-bootstrap/0.log" Jan 28 16:05:40 crc kubenswrapper[4981]: I0128 16:05:40.379948 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ee506ff0-7634-45eb-ac9f-5d5de1b3c40a/galera/0.log" Jan 28 16:05:40 crc kubenswrapper[4981]: I0128 16:05:40.546769 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6/openstackclient/0.log" Jan 28 16:05:40 crc kubenswrapper[4981]: I0128 16:05:40.593615 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c/nova-metadata-metadata/0.log" Jan 28 16:05:40 crc kubenswrapper[4981]: I0128 16:05:40.707480 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-bnkpb_8109b11f-0a6a-4894-b7f7-c6d46a62570e/ovn-controller/0.log" Jan 28 16:05:40 crc kubenswrapper[4981]: I0128 16:05:40.847899 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-cbsgf_510c52de-24af-4fe2-833d-0990283aa110/openstack-network-exporter/0.log" Jan 28 16:05:40 crc kubenswrapper[4981]: I0128 16:05:40.951805 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c8dt7_124744f1-80e9-4fe2-8889-e13e0033ac84/ovsdb-server-init/0.log" Jan 28 16:05:41 crc kubenswrapper[4981]: I0128 16:05:41.188779 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c8dt7_124744f1-80e9-4fe2-8889-e13e0033ac84/ovsdb-server-init/0.log" Jan 28 16:05:41 crc kubenswrapper[4981]: I0128 16:05:41.253487 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c8dt7_124744f1-80e9-4fe2-8889-e13e0033ac84/ovsdb-server/0.log" Jan 28 16:05:41 crc kubenswrapper[4981]: I0128 16:05:41.259308 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c8dt7_124744f1-80e9-4fe2-8889-e13e0033ac84/ovs-vswitchd/0.log" Jan 28 16:05:41 crc kubenswrapper[4981]: I0128 16:05:41.509008 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-mshfl_66c75472-5f94-47b6-bed5-94306835c5fa/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:05:41 crc kubenswrapper[4981]: I0128 16:05:41.606631 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_71b20415-1b79-4236-89db-42f2787cc2c2/openstack-network-exporter/0.log" Jan 28 16:05:41 crc kubenswrapper[4981]: I0128 16:05:41.738738 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_71b20415-1b79-4236-89db-42f2787cc2c2/ovn-northd/0.log" Jan 28 16:05:41 crc kubenswrapper[4981]: I0128 16:05:41.966640 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0a6c5d9b-a13a-42e8-9d15-f705822bb088/openstack-network-exporter/0.log" Jan 28 16:05:42 crc kubenswrapper[4981]: I0128 16:05:42.009177 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_0a6c5d9b-a13a-42e8-9d15-f705822bb088/ovsdbserver-nb/0.log" Jan 28 16:05:42 crc kubenswrapper[4981]: I0128 16:05:42.162318 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0f9063a3-5cdd-4e55-a714-79db63f3b8b9/ovsdbserver-sb/0.log" Jan 28 16:05:42 crc kubenswrapper[4981]: I0128 16:05:42.236311 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0f9063a3-5cdd-4e55-a714-79db63f3b8b9/openstack-network-exporter/0.log" Jan 28 16:05:42 crc kubenswrapper[4981]: I0128 16:05:42.370361 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77d87cc6cd-znvvw_7e60cad3-42b0-4a56-be02-e4433ea5585f/placement-api/0.log" Jan 28 16:05:42 crc kubenswrapper[4981]: I0128 16:05:42.611179 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8c327a08-ce8c-42f7-b305-cfc8b7f2d644/setup-container/0.log" Jan 28 16:05:42 crc kubenswrapper[4981]: I0128 16:05:42.622040 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77d87cc6cd-znvvw_7e60cad3-42b0-4a56-be02-e4433ea5585f/placement-log/0.log" Jan 28 16:05:42 crc kubenswrapper[4981]: I0128 16:05:42.785707 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8c327a08-ce8c-42f7-b305-cfc8b7f2d644/setup-container/0.log" Jan 28 16:05:42 crc kubenswrapper[4981]: I0128 16:05:42.891396 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8c327a08-ce8c-42f7-b305-cfc8b7f2d644/rabbitmq/0.log" Jan 28 16:05:42 crc kubenswrapper[4981]: I0128 16:05:42.924445 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ead14a8f-5759-4a7c-b8a4-6560131c28d1/setup-container/0.log" Jan 28 16:05:43 crc kubenswrapper[4981]: I0128 16:05:43.207201 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ead14a8f-5759-4a7c-b8a4-6560131c28d1/setup-container/0.log" Jan 28 16:05:43 crc kubenswrapper[4981]: I0128 16:05:43.233697 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6_5e256fd3-d946-40f7-a93d-906351bf73f8/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:05:43 crc kubenswrapper[4981]: I0128 16:05:43.236679 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ead14a8f-5759-4a7c-b8a4-6560131c28d1/rabbitmq/0.log" Jan 28 16:05:43 crc kubenswrapper[4981]: I0128 16:05:43.487001 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd_4db01d71-54cf-49d3-a603-09ee1687a0d6/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:05:43 crc kubenswrapper[4981]: I0128 16:05:43.498510 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-pp86c_7f96c624-5794-4657-b6b9-00cccf2ac699/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:05:43 crc kubenswrapper[4981]: I0128 16:05:43.716029 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-jz8bs_605a9090-e629-463f-9119-7229674dccc7/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:05:43 crc kubenswrapper[4981]: I0128 16:05:43.777955 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-gllft_2531e40d-1556-4368-b4db-be4d6364097a/ssh-known-hosts-edpm-deployment/0.log" Jan 28 16:05:44 crc kubenswrapper[4981]: I0128 16:05:44.049129 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-d79b67667-4jvlp_f3854c5d-2ac4-48d0-96df-a96b2fa5feb7/proxy-httpd/0.log" Jan 28 16:05:44 crc kubenswrapper[4981]: I0128 16:05:44.066297 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-d79b67667-4jvlp_f3854c5d-2ac4-48d0-96df-a96b2fa5feb7/proxy-server/0.log" Jan 28 16:05:44 crc kubenswrapper[4981]: I0128 16:05:44.176180 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-zs4xh_a59295bc-49fa-4b41-b2a1-3c19c27292e5/swift-ring-rebalance/0.log" Jan 28 16:05:44 crc kubenswrapper[4981]: I0128 16:05:44.389250 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/account-reaper/0.log" Jan 28 16:05:44 crc kubenswrapper[4981]: I0128 16:05:44.397997 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/account-auditor/0.log" Jan 28 16:05:44 crc kubenswrapper[4981]: I0128 16:05:44.525107 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/account-replicator/0.log" Jan 28 16:05:44 crc kubenswrapper[4981]: I0128 16:05:44.599732 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/account-server/0.log" Jan 28 16:05:44 crc kubenswrapper[4981]: I0128 16:05:44.656623 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/container-auditor/0.log" Jan 28 16:05:44 crc kubenswrapper[4981]: I0128 16:05:44.673775 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/container-replicator/0.log" Jan 28 16:05:44 crc kubenswrapper[4981]: I0128 16:05:44.805935 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/container-server/0.log" Jan 28 16:05:44 crc kubenswrapper[4981]: I0128 16:05:44.942279 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/container-updater/0.log" Jan 28 16:05:45 crc kubenswrapper[4981]: I0128 16:05:45.130431 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/object-auditor/0.log" Jan 28 16:05:45 crc kubenswrapper[4981]: I0128 16:05:45.165455 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/object-expirer/0.log" Jan 28 16:05:45 crc kubenswrapper[4981]: I0128 16:05:45.249611 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/object-replicator/0.log" Jan 28 16:05:45 crc kubenswrapper[4981]: I0128 16:05:45.342512 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/object-server/0.log" Jan 28 16:05:45 crc kubenswrapper[4981]: I0128 16:05:45.406160 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/rsync/0.log" Jan 28 16:05:45 crc kubenswrapper[4981]: I0128 16:05:45.407136 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/object-updater/0.log" Jan 28 16:05:45 crc kubenswrapper[4981]: I0128 16:05:45.510028 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/swift-recon-cron/0.log" Jan 28 16:05:45 crc kubenswrapper[4981]: I0128 16:05:45.768393 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj_b2a912b2-d6a7-4cd5-8cba-66b942182410/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:05:45 crc kubenswrapper[4981]: I0128 16:05:45.786726 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_c34b143a-0284-461d-a788-106a5f6dca6c/tempest-tests-tempest-tests-runner/0.log" Jan 28 16:05:46 crc kubenswrapper[4981]: I0128 16:05:46.007890 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b9d341b2-c188-4cb6-a39a-0313e67fac6e/test-operator-logs-container/0.log" Jan 28 16:05:46 crc kubenswrapper[4981]: I0128 16:05:46.058624 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-ckpts_f654c5ca-b187-484f-b9bd-c487bda39586/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:05:55 crc kubenswrapper[4981]: I0128 16:05:55.136419 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_3bab2457-dbba-4fa0-b0c7-0b05a9546bc6/memcached/0.log" Jan 28 16:06:15 crc kubenswrapper[4981]: I0128 16:06:15.574260 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd_1c96985c-93d1-4967-83e7-0794b3159ca9/util/0.log" Jan 28 16:06:15 crc kubenswrapper[4981]: I0128 16:06:15.720855 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd_1c96985c-93d1-4967-83e7-0794b3159ca9/util/0.log" Jan 28 16:06:15 crc kubenswrapper[4981]: I0128 16:06:15.760664 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd_1c96985c-93d1-4967-83e7-0794b3159ca9/pull/0.log" Jan 28 16:06:15 crc kubenswrapper[4981]: I0128 16:06:15.819836 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd_1c96985c-93d1-4967-83e7-0794b3159ca9/pull/0.log" Jan 28 16:06:16 crc kubenswrapper[4981]: I0128 16:06:16.013688 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd_1c96985c-93d1-4967-83e7-0794b3159ca9/util/0.log" Jan 28 16:06:16 crc kubenswrapper[4981]: I0128 16:06:16.014417 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd_1c96985c-93d1-4967-83e7-0794b3159ca9/extract/0.log" Jan 28 16:06:16 crc kubenswrapper[4981]: I0128 16:06:16.049368 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd_1c96985c-93d1-4967-83e7-0794b3159ca9/pull/0.log" Jan 28 16:06:16 crc kubenswrapper[4981]: I0128 16:06:16.260658 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-gmjsg_5d37dcb3-e31a-40e3-ba16-803490369e86/manager/0.log" Jan 28 16:06:16 crc kubenswrapper[4981]: I0128 16:06:16.332383 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-ddkfp_e2393e50-201d-45e8-96c8-f2bfba6fed7c/manager/0.log" Jan 28 16:06:16 crc kubenswrapper[4981]: I0128 16:06:16.455742 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-gmd8n_0a5805bf-96b2-4893-8811-603eacec1cba/manager/0.log" Jan 28 16:06:16 crc kubenswrapper[4981]: I0128 16:06:16.612224 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-c42cn_94e20f49-bc4f-4bbb-9c67-7f3dc5b925b5/manager/0.log" Jan 28 16:06:16 crc kubenswrapper[4981]: I0128 16:06:16.643743 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-mqpdl_9ea05521-0dfa-4175-b394-1b5e55fc4c7f/manager/0.log" Jan 28 16:06:16 crc kubenswrapper[4981]: I0128 16:06:16.787150 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-4nfrz_26db2e76-9e22-4b02-8c7f-6ae79127ae41/manager/0.log" Jan 28 16:06:17 crc kubenswrapper[4981]: I0128 16:06:17.009969 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-wq22r_e2b38a13-7a0c-4836-9abc-be0e65837eb9/manager/0.log" Jan 28 16:06:17 crc kubenswrapper[4981]: I0128 16:06:17.055880 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-mk8hk_28d521cc-409b-485a-809b-98e3e552c042/manager/0.log" Jan 28 16:06:17 crc kubenswrapper[4981]: I0128 16:06:17.244348 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-pqffr_47340953-e89f-4a20-bbd6-0e25c39b810a/manager/0.log" Jan 28 16:06:17 crc kubenswrapper[4981]: I0128 16:06:17.261550 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-6gnfx_6c02a009-565d-4217-9d71-ca0505f90cb0/manager/0.log" Jan 28 16:06:17 crc kubenswrapper[4981]: I0128 16:06:17.475177 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-w8d2t_9f95744b-30e0-4d4f-9911-12ca57813aff/manager/0.log" Jan 28 16:06:17 crc kubenswrapper[4981]: I0128 16:06:17.545540 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-wjvvk_05d30f3a-fdc7-4b65-a93b-747718217906/manager/0.log" Jan 28 16:06:17 crc kubenswrapper[4981]: I0128 16:06:17.744396 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-kd8bc_462b383e-f994-4f35-a29c-6be57d7fd20c/manager/0.log" Jan 28 16:06:17 crc kubenswrapper[4981]: I0128 16:06:17.773942 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-7tgrh_305a2f40-90a1-4e46-83a6-0ae818e35157/manager/0.log" Jan 28 16:06:18 crc kubenswrapper[4981]: I0128 16:06:18.012651 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854qllt9_655712aa-6ff8-4f99-ac13-85a3def79e97/manager/0.log" Jan 28 16:06:18 crc kubenswrapper[4981]: I0128 16:06:18.133376 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-86c9dcbc4-gnfc9_16d249a4-617e-4f09-9fca-93b89b337167/operator/0.log" Jan 28 16:06:18 crc kubenswrapper[4981]: I0128 16:06:18.384716 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cpdk7_1d4745c7-b014-492d-936f-d4c430359df3/registry-server/0.log" Jan 28 16:06:18 crc kubenswrapper[4981]: I0128 16:06:18.611451 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-8rf4r_34ddd48f-2151-4df8-af17-70b926965a9e/manager/0.log" Jan 28 16:06:18 crc kubenswrapper[4981]: I0128 16:06:18.772030 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-v4hcv_21991fd6-b7f4-48cc-b372-5e43be416857/manager/0.log" Jan 28 16:06:18 crc kubenswrapper[4981]: I0128 16:06:18.964419 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-xjp7n_ad2c98b1-4994-4602-af9f-6dce33122651/operator/0.log" Jan 28 16:06:19 crc kubenswrapper[4981]: I0128 16:06:19.149682 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-6qxz5_7338b601-fe21-458b-97b8-99977fcdb582/manager/0.log" Jan 28 16:06:19 crc kubenswrapper[4981]: I0128 16:06:19.446493 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-f8ckc_18a8ea11-fca0-4503-a458-90ae9e542401/manager/0.log" Jan 28 16:06:19 crc kubenswrapper[4981]: I0128 16:06:19.460095 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-fcdbf6b45-9f88t_b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74/manager/0.log" Jan 28 16:06:19 crc kubenswrapper[4981]: I0128 16:06:19.505794 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-pj2hb_b4355527-cc7c-436f-a9b0-69f4860f0e36/manager/0.log" Jan 28 16:06:19 crc kubenswrapper[4981]: I0128 16:06:19.663808 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-65v5g_82289c62-674e-483e-ac47-f09b000a0c90/manager/0.log" Jan 28 16:06:19 crc kubenswrapper[4981]: I0128 16:06:19.897961 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:06:19 crc kubenswrapper[4981]: I0128 16:06:19.898021 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:06:39 crc kubenswrapper[4981]: I0128 16:06:39.679562 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-x2wjc_313fb5fa-63ee-4008-9e6c-94adc6fa6e67/control-plane-machine-set-operator/0.log" Jan 28 16:06:39 crc kubenswrapper[4981]: I0128 16:06:39.834208 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rb46f_7bc7864e-dc24-4885-b829-e9ee56d0bb2a/kube-rbac-proxy/0.log" Jan 28 16:06:39 crc kubenswrapper[4981]: I0128 16:06:39.872345 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rb46f_7bc7864e-dc24-4885-b829-e9ee56d0bb2a/machine-api-operator/0.log" Jan 28 16:06:49 crc kubenswrapper[4981]: I0128 16:06:49.897959 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:06:49 crc kubenswrapper[4981]: I0128 16:06:49.898732 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:06:53 crc kubenswrapper[4981]: I0128 16:06:53.338824 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-rds8t_275dc545-afa7-4d22-9c2e-bc41e21e187f/cert-manager-controller/0.log" Jan 28 16:06:53 crc kubenswrapper[4981]: I0128 16:06:53.452390 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-lzqwx_531e0ce3-f8d2-423f-8934-5427dca677c8/cert-manager-cainjector/0.log" Jan 28 16:06:53 crc kubenswrapper[4981]: I0128 16:06:53.536223 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-m9hjc_499b468f-8150-49ce-9ec6-964f94f1234d/cert-manager-webhook/0.log" Jan 28 16:07:05 crc kubenswrapper[4981]: I0128 16:07:05.225213 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-mnsnb_e4078e50-5cc6-45b4-8a9a-3a37c51537fa/nmstate-console-plugin/0.log" Jan 28 16:07:05 crc kubenswrapper[4981]: I0128 16:07:05.527829 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-kggn8_ee7ee978-971b-4e70-ac41-8a6c8f10b226/nmstate-handler/0.log" Jan 28 16:07:05 crc kubenswrapper[4981]: I0128 16:07:05.541112 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-lps8k_802eac95-d452-45f0-b0a2-765f410e4a6c/kube-rbac-proxy/0.log" Jan 28 16:07:05 crc kubenswrapper[4981]: I0128 16:07:05.659903 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-lps8k_802eac95-d452-45f0-b0a2-765f410e4a6c/nmstate-metrics/0.log" Jan 28 16:07:05 crc kubenswrapper[4981]: I0128 16:07:05.716713 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-m7qf9_44612930-8e0e-4893-9f15-58b828449dbb/nmstate-operator/0.log" Jan 
28 16:07:05 crc kubenswrapper[4981]: I0128 16:07:05.840498 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-knc2t_efc29e7c-2d98-40bb-8335-5c763f217be4/nmstate-webhook/0.log" Jan 28 16:07:19 crc kubenswrapper[4981]: I0128 16:07:19.898012 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:07:19 crc kubenswrapper[4981]: I0128 16:07:19.898482 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:07:19 crc kubenswrapper[4981]: I0128 16:07:19.898528 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 16:07:19 crc kubenswrapper[4981]: I0128 16:07:19.899105 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539"} pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:07:19 crc kubenswrapper[4981]: I0128 16:07:19.899156 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" containerID="cri-o://849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" gracePeriod=600 Jan 28 16:07:20 crc kubenswrapper[4981]: E0128 16:07:20.030756 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:07:20 crc kubenswrapper[4981]: I0128 16:07:20.842733 4981 generic.go:334] "Generic (PLEG): container finished" podID="67525d77-715e-4ec3-bdbb-6854657355c0" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" exitCode=0 Jan 28 16:07:20 crc kubenswrapper[4981]: I0128 16:07:20.842800 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerDied","Data":"849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539"} Jan 28 16:07:20 crc kubenswrapper[4981]: I0128 16:07:20.842863 4981 scope.go:117] "RemoveContainer" containerID="39e0a59a118d98c8222e77fa5717ab9ada8940cd17d88d2684c70c105b80474d" Jan 28 16:07:20 crc kubenswrapper[4981]: I0128 16:07:20.843878 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:07:20 crc kubenswrapper[4981]: E0128 16:07:20.844306 4981 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:07:32 crc kubenswrapper[4981]: I0128 16:07:32.105296 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rxw2q_de4f8121-43f6-4041-873d-2c13aca10ed9/kube-rbac-proxy/0.log" Jan 28 16:07:32 crc kubenswrapper[4981]: I0128 16:07:32.205140 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rxw2q_de4f8121-43f6-4041-873d-2c13aca10ed9/controller/0.log" Jan 28 16:07:32 crc kubenswrapper[4981]: I0128 16:07:32.362340 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-frr-files/0.log" Jan 28 16:07:32 crc kubenswrapper[4981]: I0128 16:07:32.494008 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-frr-files/0.log" Jan 28 16:07:32 crc kubenswrapper[4981]: I0128 16:07:32.505886 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-metrics/0.log" Jan 28 16:07:32 crc kubenswrapper[4981]: I0128 16:07:32.505910 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-reloader/0.log" Jan 28 16:07:32 crc kubenswrapper[4981]: I0128 16:07:32.590919 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-reloader/0.log" Jan 28 16:07:32 crc kubenswrapper[4981]: I0128 16:07:32.720163 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-frr-files/0.log" Jan 28 16:07:32 crc kubenswrapper[4981]: I0128 16:07:32.740915 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-metrics/0.log" Jan 28 16:07:32 crc kubenswrapper[4981]: I0128 16:07:32.756843 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-reloader/0.log" Jan 28 16:07:32 crc kubenswrapper[4981]: I0128 16:07:32.786600 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-metrics/0.log" Jan 28 16:07:32 crc kubenswrapper[4981]: I0128 16:07:32.971540 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-metrics/0.log" Jan 28 16:07:32 crc kubenswrapper[4981]: I0128 16:07:32.997915 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-frr-files/0.log" Jan 28 16:07:33 crc kubenswrapper[4981]: I0128 16:07:33.004055 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/controller/0.log" Jan 28 16:07:33 crc kubenswrapper[4981]: I0128 16:07:33.024765 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-reloader/0.log" Jan 28 16:07:33 crc kubenswrapper[4981]: I0128 16:07:33.183622 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/kube-rbac-proxy/0.log" Jan 28 16:07:33 crc kubenswrapper[4981]: I0128 16:07:33.194106 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/kube-rbac-proxy-frr/0.log" Jan 28 16:07:33 crc kubenswrapper[4981]: I0128 16:07:33.208206 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/frr-metrics/0.log" Jan 28 16:07:33 crc kubenswrapper[4981]: I0128 16:07:33.380621 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/reloader/0.log" Jan 28 16:07:33 crc kubenswrapper[4981]: I0128 16:07:33.458036 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-ksslx_c0df8723-60c7-4731-8420-e3279d5f1fce/frr-k8s-webhook-server/0.log" Jan 28 16:07:33 crc kubenswrapper[4981]: I0128 16:07:33.672106 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-599f895949-pcz4b_0a0d4786-f200-41af-b16c-23528e0537dd/manager/0.log" Jan 28 16:07:33 crc kubenswrapper[4981]: I0128 16:07:33.868929 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-59465cf79b-kmxjc_bd39e6ba-068e-4ce1-936b-15b3c003cd04/webhook-server/0.log" Jan 28 16:07:33 crc kubenswrapper[4981]: I0128 16:07:33.948263 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-86r4q_50c5243a-5761-47df-b450-770a6522770c/kube-rbac-proxy/0.log" Jan 28 16:07:34 crc kubenswrapper[4981]: I0128 16:07:34.573976 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-86r4q_50c5243a-5761-47df-b450-770a6522770c/speaker/0.log" Jan 28 16:07:34 crc kubenswrapper[4981]: I0128 16:07:34.675028 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/frr/0.log" Jan 28 16:07:35 crc kubenswrapper[4981]: I0128 16:07:35.318309 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:07:35 crc kubenswrapper[4981]: E0128 16:07:35.318810 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:07:47 crc kubenswrapper[4981]: I0128 16:07:47.187461 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg_34033ece-4d02-4648-9025-0642096f42d3/util/0.log" Jan 28 16:07:47 crc kubenswrapper[4981]: I0128 16:07:47.452851 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg_34033ece-4d02-4648-9025-0642096f42d3/pull/0.log" Jan 28 16:07:47 crc 
kubenswrapper[4981]: I0128 16:07:47.496767 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg_34033ece-4d02-4648-9025-0642096f42d3/pull/0.log" Jan 28 16:07:47 crc kubenswrapper[4981]: I0128 16:07:47.515045 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg_34033ece-4d02-4648-9025-0642096f42d3/util/0.log" Jan 28 16:07:47 crc kubenswrapper[4981]: I0128 16:07:47.628221 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg_34033ece-4d02-4648-9025-0642096f42d3/pull/0.log" Jan 28 16:07:47 crc kubenswrapper[4981]: I0128 16:07:47.630581 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg_34033ece-4d02-4648-9025-0642096f42d3/util/0.log" Jan 28 16:07:47 crc kubenswrapper[4981]: I0128 16:07:47.683391 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg_34033ece-4d02-4648-9025-0642096f42d3/extract/0.log" Jan 28 16:07:47 crc kubenswrapper[4981]: I0128 16:07:47.804308 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl_35228b73-1ad1-4fa7-9470-ba0f42f71c3f/util/0.log" Jan 28 16:07:47 crc kubenswrapper[4981]: I0128 16:07:47.989524 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl_35228b73-1ad1-4fa7-9470-ba0f42f71c3f/util/0.log" Jan 28 16:07:47 crc kubenswrapper[4981]: I0128 16:07:47.998765 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl_35228b73-1ad1-4fa7-9470-ba0f42f71c3f/pull/0.log" Jan 28 16:07:47 crc kubenswrapper[4981]: I0128 16:07:47.999222 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl_35228b73-1ad1-4fa7-9470-ba0f42f71c3f/pull/0.log" Jan 28 16:07:48 crc kubenswrapper[4981]: I0128 16:07:48.171937 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl_35228b73-1ad1-4fa7-9470-ba0f42f71c3f/util/0.log" Jan 28 16:07:48 crc kubenswrapper[4981]: I0128 16:07:48.223221 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl_35228b73-1ad1-4fa7-9470-ba0f42f71c3f/pull/0.log" Jan 28 16:07:48 crc kubenswrapper[4981]: I0128 16:07:48.229699 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl_35228b73-1ad1-4fa7-9470-ba0f42f71c3f/extract/0.log" Jan 28 16:07:48 crc kubenswrapper[4981]: I0128 16:07:48.348359 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bcv2k_386dca69-5d28-4d18-899e-7fd92d5eb6ad/extract-utilities/0.log" Jan 28 16:07:48 crc kubenswrapper[4981]: I0128 16:07:48.531324 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-bcv2k_386dca69-5d28-4d18-899e-7fd92d5eb6ad/extract-utilities/0.log" Jan 28 16:07:48 crc kubenswrapper[4981]: I0128 16:07:48.536098 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bcv2k_386dca69-5d28-4d18-899e-7fd92d5eb6ad/extract-content/0.log" Jan 28 16:07:48 crc kubenswrapper[4981]: I0128 16:07:48.552112 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bcv2k_386dca69-5d28-4d18-899e-7fd92d5eb6ad/extract-content/0.log" Jan 28 16:07:48 crc kubenswrapper[4981]: I0128 16:07:48.752528 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bcv2k_386dca69-5d28-4d18-899e-7fd92d5eb6ad/extract-content/0.log" Jan 28 16:07:48 crc kubenswrapper[4981]: I0128 16:07:48.780292 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bcv2k_386dca69-5d28-4d18-899e-7fd92d5eb6ad/extract-utilities/0.log" Jan 28 16:07:48 crc kubenswrapper[4981]: I0128 16:07:48.970449 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6hpb_baebe073-e075-4f9f-98aa-d1fbe2e55934/extract-utilities/0.log" Jan 28 16:07:49 crc kubenswrapper[4981]: I0128 16:07:49.139021 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6hpb_baebe073-e075-4f9f-98aa-d1fbe2e55934/extract-utilities/0.log" Jan 28 16:07:49 crc kubenswrapper[4981]: I0128 16:07:49.174225 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6hpb_baebe073-e075-4f9f-98aa-d1fbe2e55934/extract-content/0.log" Jan 28 16:07:49 crc kubenswrapper[4981]: I0128 16:07:49.200736 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6hpb_baebe073-e075-4f9f-98aa-d1fbe2e55934/extract-content/0.log" Jan 28 16:07:49 crc kubenswrapper[4981]: I0128 16:07:49.329934 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:07:49 crc kubenswrapper[4981]: E0128 16:07:49.330138 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:07:49 crc kubenswrapper[4981]: I0128 16:07:49.335314 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6hpb_baebe073-e075-4f9f-98aa-d1fbe2e55934/extract-utilities/0.log" Jan 28 16:07:49 crc kubenswrapper[4981]: I0128 16:07:49.439693 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6hpb_baebe073-e075-4f9f-98aa-d1fbe2e55934/extract-content/0.log" Jan 28 16:07:49 crc kubenswrapper[4981]: I0128 16:07:49.558743 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bcv2k_386dca69-5d28-4d18-899e-7fd92d5eb6ad/registry-server/0.log" Jan 28 16:07:49 crc kubenswrapper[4981]: I0128 16:07:49.642223 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-49xjs_8b6270bb-35bf-4292-b065-b6572531a590/marketplace-operator/0.log" Jan 28 16:07:49 crc kubenswrapper[4981]: I0128 16:07:49.854764 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-njnj4_b6349813-dabe-4141-9e83-8d8a99458444/extract-utilities/0.log" Jan 28 16:07:50 crc kubenswrapper[4981]: I0128 16:07:50.046895 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6hpb_baebe073-e075-4f9f-98aa-d1fbe2e55934/registry-server/0.log" Jan 28 16:07:50 crc kubenswrapper[4981]: I0128 16:07:50.049695 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-njnj4_b6349813-dabe-4141-9e83-8d8a99458444/extract-utilities/0.log" Jan 28 16:07:50 crc kubenswrapper[4981]: I0128 16:07:50.355818 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-njnj4_b6349813-dabe-4141-9e83-8d8a99458444/extract-content/0.log" Jan 28 16:07:50 crc kubenswrapper[4981]: I0128 16:07:50.374159 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-njnj4_b6349813-dabe-4141-9e83-8d8a99458444/extract-content/0.log" Jan 28 16:07:50 crc kubenswrapper[4981]: I0128 16:07:50.527764 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-njnj4_b6349813-dabe-4141-9e83-8d8a99458444/extract-utilities/0.log" Jan 28 16:07:50 crc kubenswrapper[4981]: I0128 16:07:50.533146 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-njnj4_b6349813-dabe-4141-9e83-8d8a99458444/extract-content/0.log" Jan 28 16:07:50 crc kubenswrapper[4981]: I0128 16:07:50.718549 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wl8c9_f6175e49-f0fc-41d3-b750-e9e1b6cbca02/extract-utilities/0.log" Jan 28 16:07:50 crc kubenswrapper[4981]: I0128 16:07:50.797059 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-njnj4_b6349813-dabe-4141-9e83-8d8a99458444/registry-server/0.log" Jan 28 16:07:50 crc kubenswrapper[4981]: I0128 16:07:50.998954 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wl8c9_f6175e49-f0fc-41d3-b750-e9e1b6cbca02/extract-utilities/0.log" Jan 28 16:07:51 crc kubenswrapper[4981]: I0128 16:07:51.013530 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wl8c9_f6175e49-f0fc-41d3-b750-e9e1b6cbca02/extract-content/0.log" Jan 28 16:07:51 crc kubenswrapper[4981]: I0128 16:07:51.051632 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wl8c9_f6175e49-f0fc-41d3-b750-e9e1b6cbca02/extract-content/0.log" Jan 28 16:07:51 crc kubenswrapper[4981]: I0128 16:07:51.235823 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wl8c9_f6175e49-f0fc-41d3-b750-e9e1b6cbca02/extract-content/0.log" Jan 28 16:07:51 crc kubenswrapper[4981]: I0128 16:07:51.236378 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wl8c9_f6175e49-f0fc-41d3-b750-e9e1b6cbca02/extract-utilities/0.log" Jan 28 16:07:51 crc kubenswrapper[4981]: I0128 16:07:51.401690 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-wl8c9_f6175e49-f0fc-41d3-b750-e9e1b6cbca02/registry-server/0.log" Jan 28 16:07:58 crc kubenswrapper[4981]: I0128 16:07:58.860564 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h5dwq"] Jan 28 16:07:58 crc kubenswrapper[4981]: E0128 16:07:58.862624 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae08581-1e63-411b-a6ab-a5d985571e69" containerName="extract-content" Jan 28 16:07:58 crc kubenswrapper[4981]: I0128 16:07:58.862728 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae08581-1e63-411b-a6ab-a5d985571e69" containerName="extract-content" Jan 28 16:07:58 crc kubenswrapper[4981]: E0128 16:07:58.862833 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae08581-1e63-411b-a6ab-a5d985571e69" containerName="extract-utilities" Jan 28 16:07:58 crc kubenswrapper[4981]: I0128 16:07:58.862893 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae08581-1e63-411b-a6ab-a5d985571e69" containerName="extract-utilities" Jan 28 16:07:58 crc kubenswrapper[4981]: E0128 16:07:58.862976 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d653964f-ec30-4ceb-8d9a-1af11d750b61" containerName="container-00" Jan 28 16:07:58 crc kubenswrapper[4981]: I0128 16:07:58.863039 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="d653964f-ec30-4ceb-8d9a-1af11d750b61" containerName="container-00" Jan 28 16:07:58 crc kubenswrapper[4981]: E0128 16:07:58.863105 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ae08581-1e63-411b-a6ab-a5d985571e69" containerName="registry-server" Jan 28 16:07:58 crc kubenswrapper[4981]: I0128 16:07:58.863174 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ae08581-1e63-411b-a6ab-a5d985571e69" containerName="registry-server" Jan 28 16:07:58 crc kubenswrapper[4981]: I0128 16:07:58.863700 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="d653964f-ec30-4ceb-8d9a-1af11d750b61" containerName="container-00" Jan 28 16:07:58 crc kubenswrapper[4981]: I0128 16:07:58.863786 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ae08581-1e63-411b-a6ab-a5d985571e69" containerName="registry-server" Jan 28 16:07:58 crc kubenswrapper[4981]: I0128 16:07:58.865329 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h5dwq" Jan 28 16:07:58 crc kubenswrapper[4981]: I0128 16:07:58.876471 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h5dwq"] Jan 28 16:07:58 crc kubenswrapper[4981]: I0128 16:07:58.941400 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b665308a-721b-4196-b9d9-82bde4e903ae-catalog-content\") pod \"redhat-operators-h5dwq\" (UID: \"b665308a-721b-4196-b9d9-82bde4e903ae\") " pod="openshift-marketplace/redhat-operators-h5dwq" Jan 28 16:07:58 crc kubenswrapper[4981]: I0128 16:07:58.941489 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b665308a-721b-4196-b9d9-82bde4e903ae-utilities\") pod \"redhat-operators-h5dwq\" (UID: \"b665308a-721b-4196-b9d9-82bde4e903ae\") " pod="openshift-marketplace/redhat-operators-h5dwq" Jan 28 16:07:58 crc kubenswrapper[4981]: I0128 16:07:58.941533 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f672g\" (UniqueName: \"kubernetes.io/projected/b665308a-721b-4196-b9d9-82bde4e903ae-kube-api-access-f672g\") pod \"redhat-operators-h5dwq\" (UID: \"b665308a-721b-4196-b9d9-82bde4e903ae\") " pod="openshift-marketplace/redhat-operators-h5dwq" Jan 28 16:07:59 crc kubenswrapper[4981]: I0128 16:07:59.042960 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b665308a-721b-4196-b9d9-82bde4e903ae-utilities\") pod \"redhat-operators-h5dwq\" (UID: \"b665308a-721b-4196-b9d9-82bde4e903ae\") " pod="openshift-marketplace/redhat-operators-h5dwq" Jan 28 16:07:59 crc kubenswrapper[4981]: I0128 16:07:59.043023 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f672g\" (UniqueName: \"kubernetes.io/projected/b665308a-721b-4196-b9d9-82bde4e903ae-kube-api-access-f672g\") pod \"redhat-operators-h5dwq\" (UID: \"b665308a-721b-4196-b9d9-82bde4e903ae\") " pod="openshift-marketplace/redhat-operators-h5dwq" Jan 28 16:07:59 crc kubenswrapper[4981]: I0128 16:07:59.043115 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b665308a-721b-4196-b9d9-82bde4e903ae-catalog-content\") pod \"redhat-operators-h5dwq\" (UID: \"b665308a-721b-4196-b9d9-82bde4e903ae\") " pod="openshift-marketplace/redhat-operators-h5dwq" Jan 28 16:07:59 crc kubenswrapper[4981]: I0128 16:07:59.043629 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b665308a-721b-4196-b9d9-82bde4e903ae-utilities\") pod \"redhat-operators-h5dwq\" (UID: \"b665308a-721b-4196-b9d9-82bde4e903ae\") " pod="openshift-marketplace/redhat-operators-h5dwq" Jan 28 16:07:59 crc kubenswrapper[4981]: I0128 16:07:59.043962 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b665308a-721b-4196-b9d9-82bde4e903ae-catalog-content\") pod \"redhat-operators-h5dwq\" (UID: \"b665308a-721b-4196-b9d9-82bde4e903ae\") " pod="openshift-marketplace/redhat-operators-h5dwq" Jan 28 16:07:59 crc kubenswrapper[4981]: I0128 16:07:59.063551 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-f672g\" (UniqueName: \"kubernetes.io/projected/b665308a-721b-4196-b9d9-82bde4e903ae-kube-api-access-f672g\") pod \"redhat-operators-h5dwq\" (UID: \"b665308a-721b-4196-b9d9-82bde4e903ae\") " pod="openshift-marketplace/redhat-operators-h5dwq" Jan 28 16:07:59 crc kubenswrapper[4981]: I0128 16:07:59.251524 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h5dwq" Jan 28 16:07:59 crc kubenswrapper[4981]: I0128 16:07:59.704583 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h5dwq"] Jan 28 16:08:00 crc kubenswrapper[4981]: I0128 16:08:00.197258 4981 generic.go:334] "Generic (PLEG): container finished" podID="b665308a-721b-4196-b9d9-82bde4e903ae" containerID="32bcb3f41bfdf8588f6d15c1a9ec08751db66d1ce27cb61d76b6247f7ec727af" exitCode=0 Jan 28 16:08:00 crc kubenswrapper[4981]: I0128 16:08:00.197462 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5dwq" event={"ID":"b665308a-721b-4196-b9d9-82bde4e903ae","Type":"ContainerDied","Data":"32bcb3f41bfdf8588f6d15c1a9ec08751db66d1ce27cb61d76b6247f7ec727af"} Jan 28 16:08:00 crc kubenswrapper[4981]: I0128 16:08:00.197570 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5dwq" event={"ID":"b665308a-721b-4196-b9d9-82bde4e903ae","Type":"ContainerStarted","Data":"81f8aafcc40e3d8acb09794404f0841ae3b1d8e3d6fc682dc8ebf78451eb5aab"} Jan 28 16:08:00 crc kubenswrapper[4981]: I0128 16:08:00.201598 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 16:08:01 crc kubenswrapper[4981]: I0128 16:08:01.319565 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:08:01 crc kubenswrapper[4981]: E0128 16:08:01.320407 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:08:04 crc kubenswrapper[4981]: I0128 16:08:04.233126 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5dwq" event={"ID":"b665308a-721b-4196-b9d9-82bde4e903ae","Type":"ContainerStarted","Data":"0eb2f8487813cdc4be8dcc0c72bd0a28c29458919ceae352be56812f669c34b8"} Jan 28 16:08:15 crc kubenswrapper[4981]: I0128 16:08:15.319082 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:08:15 crc kubenswrapper[4981]: E0128 16:08:15.319805 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:08:15 crc kubenswrapper[4981]: I0128 16:08:15.323922 4981 generic.go:334] "Generic (PLEG): container finished" podID="b665308a-721b-4196-b9d9-82bde4e903ae" 
containerID="0eb2f8487813cdc4be8dcc0c72bd0a28c29458919ceae352be56812f669c34b8" exitCode=0 Jan 28 16:08:15 crc kubenswrapper[4981]: I0128 16:08:15.328174 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5dwq" event={"ID":"b665308a-721b-4196-b9d9-82bde4e903ae","Type":"ContainerDied","Data":"0eb2f8487813cdc4be8dcc0c72bd0a28c29458919ceae352be56812f669c34b8"} Jan 28 16:08:16 crc kubenswrapper[4981]: I0128 16:08:16.335649 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5dwq" event={"ID":"b665308a-721b-4196-b9d9-82bde4e903ae","Type":"ContainerStarted","Data":"e489e2fdc7f6fe4d2480f86a5057e4d7c62dfdd4cdc16d27f8584f2666cce898"} Jan 28 16:08:16 crc kubenswrapper[4981]: I0128 16:08:16.363618 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h5dwq" podStartSLOduration=2.612177369 podStartE2EDuration="18.363598898s" podCreationTimestamp="2026-01-28 16:07:58 +0000 UTC" firstStartedPulling="2026-01-28 16:08:00.201350428 +0000 UTC m=+3891.653508669" lastFinishedPulling="2026-01-28 16:08:15.952771957 +0000 UTC m=+3907.404930198" observedRunningTime="2026-01-28 16:08:16.354396494 +0000 UTC m=+3907.806554735" watchObservedRunningTime="2026-01-28 16:08:16.363598898 +0000 UTC m=+3907.815757129" Jan 28 16:08:19 crc kubenswrapper[4981]: I0128 16:08:19.252395 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h5dwq" Jan 28 16:08:19 crc kubenswrapper[4981]: I0128 16:08:19.252741 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h5dwq" Jan 28 16:08:20 crc kubenswrapper[4981]: I0128 16:08:20.310682 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h5dwq" podUID="b665308a-721b-4196-b9d9-82bde4e903ae" containerName="registry-server" probeResult="failure" output=< Jan 28 16:08:20 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s Jan 28 16:08:20 crc kubenswrapper[4981]: > Jan 28 16:08:26 crc kubenswrapper[4981]: I0128 16:08:26.319216 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:08:26 crc kubenswrapper[4981]: E0128 16:08:26.320076 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:08:30 crc kubenswrapper[4981]: I0128 16:08:30.312871 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h5dwq" podUID="b665308a-721b-4196-b9d9-82bde4e903ae" containerName="registry-server" probeResult="failure" output=< Jan 28 16:08:30 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s Jan 28 16:08:30 crc kubenswrapper[4981]: > Jan 28 16:08:38 crc kubenswrapper[4981]: I0128 16:08:38.321987 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:08:38 crc kubenswrapper[4981]: E0128 16:08:38.323752 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:08:40 crc kubenswrapper[4981]: I0128 16:08:40.321367 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h5dwq" podUID="b665308a-721b-4196-b9d9-82bde4e903ae" containerName="registry-server" probeResult="failure" output=< Jan 28 16:08:40 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s Jan 28 16:08:40 crc kubenswrapper[4981]: > Jan 28 16:08:49 crc kubenswrapper[4981]: I0128 16:08:49.337765 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h5dwq" Jan 28 16:08:49 crc kubenswrapper[4981]: I0128 16:08:49.425078 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h5dwq" Jan 28 16:08:49 crc kubenswrapper[4981]: I0128 16:08:49.578964 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h5dwq"] Jan 28 16:08:50 crc kubenswrapper[4981]: I0128 16:08:50.661593 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h5dwq" podUID="b665308a-721b-4196-b9d9-82bde4e903ae" containerName="registry-server" containerID="cri-o://e489e2fdc7f6fe4d2480f86a5057e4d7c62dfdd4cdc16d27f8584f2666cce898" gracePeriod=2 Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.172788 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h5dwq" Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.354137 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b665308a-721b-4196-b9d9-82bde4e903ae-utilities\") pod \"b665308a-721b-4196-b9d9-82bde4e903ae\" (UID: \"b665308a-721b-4196-b9d9-82bde4e903ae\") " Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.354764 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f672g\" (UniqueName: \"kubernetes.io/projected/b665308a-721b-4196-b9d9-82bde4e903ae-kube-api-access-f672g\") pod \"b665308a-721b-4196-b9d9-82bde4e903ae\" (UID: \"b665308a-721b-4196-b9d9-82bde4e903ae\") " Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.354855 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b665308a-721b-4196-b9d9-82bde4e903ae-catalog-content\") pod \"b665308a-721b-4196-b9d9-82bde4e903ae\" (UID: \"b665308a-721b-4196-b9d9-82bde4e903ae\") " Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.355337 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b665308a-721b-4196-b9d9-82bde4e903ae-utilities" (OuterVolumeSpecName: "utilities") pod "b665308a-721b-4196-b9d9-82bde4e903ae" (UID: "b665308a-721b-4196-b9d9-82bde4e903ae"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.361026 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b665308a-721b-4196-b9d9-82bde4e903ae-kube-api-access-f672g" (OuterVolumeSpecName: "kube-api-access-f672g") pod "b665308a-721b-4196-b9d9-82bde4e903ae" (UID: "b665308a-721b-4196-b9d9-82bde4e903ae"). InnerVolumeSpecName "kube-api-access-f672g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.465420 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b665308a-721b-4196-b9d9-82bde4e903ae-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.465467 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f672g\" (UniqueName: \"kubernetes.io/projected/b665308a-721b-4196-b9d9-82bde4e903ae-kube-api-access-f672g\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.493415 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b665308a-721b-4196-b9d9-82bde4e903ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b665308a-721b-4196-b9d9-82bde4e903ae" (UID: "b665308a-721b-4196-b9d9-82bde4e903ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.566697 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b665308a-721b-4196-b9d9-82bde4e903ae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.676316 4981 generic.go:334] "Generic (PLEG): container finished" podID="b665308a-721b-4196-b9d9-82bde4e903ae" containerID="e489e2fdc7f6fe4d2480f86a5057e4d7c62dfdd4cdc16d27f8584f2666cce898" exitCode=0 Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.676500 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5dwq" event={"ID":"b665308a-721b-4196-b9d9-82bde4e903ae","Type":"ContainerDied","Data":"e489e2fdc7f6fe4d2480f86a5057e4d7c62dfdd4cdc16d27f8584f2666cce898"} Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.676665 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5dwq" event={"ID":"b665308a-721b-4196-b9d9-82bde4e903ae","Type":"ContainerDied","Data":"81f8aafcc40e3d8acb09794404f0841ae3b1d8e3d6fc682dc8ebf78451eb5aab"} Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.676695 4981 scope.go:117] "RemoveContainer" containerID="e489e2fdc7f6fe4d2480f86a5057e4d7c62dfdd4cdc16d27f8584f2666cce898" Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.676939 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h5dwq" Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.723490 4981 scope.go:117] "RemoveContainer" containerID="0eb2f8487813cdc4be8dcc0c72bd0a28c29458919ceae352be56812f669c34b8" Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.753738 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h5dwq"] Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.763354 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h5dwq"] Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.768070 4981 scope.go:117] "RemoveContainer" containerID="32bcb3f41bfdf8588f6d15c1a9ec08751db66d1ce27cb61d76b6247f7ec727af" Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.815822 4981 scope.go:117] "RemoveContainer" containerID="e489e2fdc7f6fe4d2480f86a5057e4d7c62dfdd4cdc16d27f8584f2666cce898" Jan 28 16:08:51 crc kubenswrapper[4981]: E0128 16:08:51.818964 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e489e2fdc7f6fe4d2480f86a5057e4d7c62dfdd4cdc16d27f8584f2666cce898\": container with ID starting with e489e2fdc7f6fe4d2480f86a5057e4d7c62dfdd4cdc16d27f8584f2666cce898 not found: ID does not exist" containerID="e489e2fdc7f6fe4d2480f86a5057e4d7c62dfdd4cdc16d27f8584f2666cce898" Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.819017 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e489e2fdc7f6fe4d2480f86a5057e4d7c62dfdd4cdc16d27f8584f2666cce898"} err="failed to get container status \"e489e2fdc7f6fe4d2480f86a5057e4d7c62dfdd4cdc16d27f8584f2666cce898\": rpc error: code = NotFound desc = could not find container \"e489e2fdc7f6fe4d2480f86a5057e4d7c62dfdd4cdc16d27f8584f2666cce898\": container with ID starting with e489e2fdc7f6fe4d2480f86a5057e4d7c62dfdd4cdc16d27f8584f2666cce898 not found: ID does not exist" Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.819054 4981 scope.go:117] "RemoveContainer" containerID="0eb2f8487813cdc4be8dcc0c72bd0a28c29458919ceae352be56812f669c34b8" Jan 28 16:08:51 crc kubenswrapper[4981]: E0128 16:08:51.820351 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eb2f8487813cdc4be8dcc0c72bd0a28c29458919ceae352be56812f669c34b8\": container with ID starting with 0eb2f8487813cdc4be8dcc0c72bd0a28c29458919ceae352be56812f669c34b8 not found: ID does not exist" containerID="0eb2f8487813cdc4be8dcc0c72bd0a28c29458919ceae352be56812f669c34b8" Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.820401 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eb2f8487813cdc4be8dcc0c72bd0a28c29458919ceae352be56812f669c34b8"} err="failed to get container status \"0eb2f8487813cdc4be8dcc0c72bd0a28c29458919ceae352be56812f669c34b8\": rpc error: code = NotFound desc = could not find container \"0eb2f8487813cdc4be8dcc0c72bd0a28c29458919ceae352be56812f669c34b8\": container with ID starting with 0eb2f8487813cdc4be8dcc0c72bd0a28c29458919ceae352be56812f669c34b8 not found: ID does not exist" Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.820431 4981 scope.go:117] "RemoveContainer" containerID="32bcb3f41bfdf8588f6d15c1a9ec08751db66d1ce27cb61d76b6247f7ec727af" Jan 28 16:08:51 crc kubenswrapper[4981]: E0128 16:08:51.820721 4981 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"32bcb3f41bfdf8588f6d15c1a9ec08751db66d1ce27cb61d76b6247f7ec727af\": container with ID starting with 32bcb3f41bfdf8588f6d15c1a9ec08751db66d1ce27cb61d76b6247f7ec727af not found: ID does not exist" containerID="32bcb3f41bfdf8588f6d15c1a9ec08751db66d1ce27cb61d76b6247f7ec727af" Jan 28 16:08:51 crc kubenswrapper[4981]: I0128 16:08:51.820749 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32bcb3f41bfdf8588f6d15c1a9ec08751db66d1ce27cb61d76b6247f7ec727af"} err="failed to get container status \"32bcb3f41bfdf8588f6d15c1a9ec08751db66d1ce27cb61d76b6247f7ec727af\": rpc error: code = NotFound desc = could not find container \"32bcb3f41bfdf8588f6d15c1a9ec08751db66d1ce27cb61d76b6247f7ec727af\": container with ID starting with 32bcb3f41bfdf8588f6d15c1a9ec08751db66d1ce27cb61d76b6247f7ec727af not found: ID does not exist" Jan 28 16:08:53 crc kubenswrapper[4981]: I0128 16:08:53.319240 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:08:53 crc kubenswrapper[4981]: E0128 16:08:53.319890 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:08:53 crc kubenswrapper[4981]: I0128 16:08:53.334925 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b665308a-721b-4196-b9d9-82bde4e903ae" path="/var/lib/kubelet/pods/b665308a-721b-4196-b9d9-82bde4e903ae/volumes" Jan 28 16:09:05 crc kubenswrapper[4981]: I0128 16:09:05.320183 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:09:05 crc kubenswrapper[4981]: E0128 16:09:05.325040 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:09:16 crc kubenswrapper[4981]: I0128 16:09:16.320387 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:09:16 crc kubenswrapper[4981]: E0128 16:09:16.321529 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:09:30 crc kubenswrapper[4981]: I0128 16:09:30.318931 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:09:30 crc kubenswrapper[4981]: E0128 16:09:30.319992 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:09:40 crc kubenswrapper[4981]: I0128 16:09:40.160011 4981 generic.go:334] "Generic (PLEG): container finished" podID="442612b4-927a-47bb-b48a-29a6ab80e0bb" containerID="bf210db8c873ad8c599232d1fd654bbd2b50855834163fcf082ff1b75ce942aa" exitCode=0 Jan 28 16:09:40 crc kubenswrapper[4981]: I0128 16:09:40.160098 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dk42d/must-gather-m4plm" event={"ID":"442612b4-927a-47bb-b48a-29a6ab80e0bb","Type":"ContainerDied","Data":"bf210db8c873ad8c599232d1fd654bbd2b50855834163fcf082ff1b75ce942aa"} Jan 28 16:09:40 crc kubenswrapper[4981]: I0128 16:09:40.161345 4981 scope.go:117] "RemoveContainer" containerID="bf210db8c873ad8c599232d1fd654bbd2b50855834163fcf082ff1b75ce942aa" Jan 28 16:09:40 crc kubenswrapper[4981]: I0128 16:09:40.340755 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dk42d_must-gather-m4plm_442612b4-927a-47bb-b48a-29a6ab80e0bb/gather/0.log" Jan 28 16:09:41 crc kubenswrapper[4981]: I0128 16:09:41.318964 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:09:41 crc kubenswrapper[4981]: E0128 16:09:41.319585 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:09:47 crc kubenswrapper[4981]: I0128 16:09:47.660519 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dk42d/must-gather-m4plm"] Jan 28 16:09:47 crc kubenswrapper[4981]: I0128 16:09:47.661264 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-dk42d/must-gather-m4plm" podUID="442612b4-927a-47bb-b48a-29a6ab80e0bb" containerName="copy" containerID="cri-o://41af3999fd65bddee1fb67239f7575be1d415e3aab47ae8ebbeda4795b2fea03" gracePeriod=2 Jan 28 16:09:47 crc kubenswrapper[4981]: I0128 16:09:47.670978 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dk42d/must-gather-m4plm"] Jan 28 16:09:48 crc kubenswrapper[4981]: I0128 16:09:48.183424 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dk42d_must-gather-m4plm_442612b4-927a-47bb-b48a-29a6ab80e0bb/copy/0.log" Jan 28 16:09:48 crc kubenswrapper[4981]: I0128 16:09:48.184314 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dk42d/must-gather-m4plm" Jan 28 16:09:48 crc kubenswrapper[4981]: I0128 16:09:48.234632 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dk42d_must-gather-m4plm_442612b4-927a-47bb-b48a-29a6ab80e0bb/copy/0.log" Jan 28 16:09:48 crc kubenswrapper[4981]: I0128 16:09:48.235147 4981 generic.go:334] "Generic (PLEG): container finished" podID="442612b4-927a-47bb-b48a-29a6ab80e0bb" containerID="41af3999fd65bddee1fb67239f7575be1d415e3aab47ae8ebbeda4795b2fea03" exitCode=143 Jan 28 16:09:48 crc kubenswrapper[4981]: I0128 16:09:48.235225 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dk42d/must-gather-m4plm" Jan 28 16:09:48 crc kubenswrapper[4981]: I0128 16:09:48.235237 4981 scope.go:117] "RemoveContainer" containerID="41af3999fd65bddee1fb67239f7575be1d415e3aab47ae8ebbeda4795b2fea03" Jan 28 16:09:48 crc kubenswrapper[4981]: I0128 16:09:48.253377 4981 scope.go:117] "RemoveContainer" containerID="bf210db8c873ad8c599232d1fd654bbd2b50855834163fcf082ff1b75ce942aa" Jan 28 16:09:48 crc kubenswrapper[4981]: I0128 16:09:48.315015 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/442612b4-927a-47bb-b48a-29a6ab80e0bb-must-gather-output\") pod \"442612b4-927a-47bb-b48a-29a6ab80e0bb\" (UID: \"442612b4-927a-47bb-b48a-29a6ab80e0bb\") " Jan 28 16:09:48 crc kubenswrapper[4981]: I0128 16:09:48.315104 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n9gk\" (UniqueName: \"kubernetes.io/projected/442612b4-927a-47bb-b48a-29a6ab80e0bb-kube-api-access-4n9gk\") pod \"442612b4-927a-47bb-b48a-29a6ab80e0bb\" (UID: \"442612b4-927a-47bb-b48a-29a6ab80e0bb\") " Jan 28 16:09:48 crc kubenswrapper[4981]: I0128 16:09:48.321818 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/442612b4-927a-47bb-b48a-29a6ab80e0bb-kube-api-access-4n9gk" (OuterVolumeSpecName: "kube-api-access-4n9gk") pod "442612b4-927a-47bb-b48a-29a6ab80e0bb" (UID: "442612b4-927a-47bb-b48a-29a6ab80e0bb"). InnerVolumeSpecName "kube-api-access-4n9gk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4981]: I0128 16:09:48.336742 4981 scope.go:117] "RemoveContainer" containerID="41af3999fd65bddee1fb67239f7575be1d415e3aab47ae8ebbeda4795b2fea03" Jan 28 16:09:48 crc kubenswrapper[4981]: E0128 16:09:48.337452 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41af3999fd65bddee1fb67239f7575be1d415e3aab47ae8ebbeda4795b2fea03\": container with ID starting with 41af3999fd65bddee1fb67239f7575be1d415e3aab47ae8ebbeda4795b2fea03 not found: ID does not exist" containerID="41af3999fd65bddee1fb67239f7575be1d415e3aab47ae8ebbeda4795b2fea03" Jan 28 16:09:48 crc kubenswrapper[4981]: I0128 16:09:48.337523 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41af3999fd65bddee1fb67239f7575be1d415e3aab47ae8ebbeda4795b2fea03"} err="failed to get container status \"41af3999fd65bddee1fb67239f7575be1d415e3aab47ae8ebbeda4795b2fea03\": rpc error: code = NotFound desc = could not find container \"41af3999fd65bddee1fb67239f7575be1d415e3aab47ae8ebbeda4795b2fea03\": container with ID starting with 41af3999fd65bddee1fb67239f7575be1d415e3aab47ae8ebbeda4795b2fea03 not found: ID does not exist" Jan 28 16:09:48 crc kubenswrapper[4981]: I0128 16:09:48.337553 4981 scope.go:117] "RemoveContainer" containerID="bf210db8c873ad8c599232d1fd654bbd2b50855834163fcf082ff1b75ce942aa" Jan 28 16:09:48 crc kubenswrapper[4981]: E0128 16:09:48.339963 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf210db8c873ad8c599232d1fd654bbd2b50855834163fcf082ff1b75ce942aa\": container with ID starting with bf210db8c873ad8c599232d1fd654bbd2b50855834163fcf082ff1b75ce942aa not found: ID does not exist" containerID="bf210db8c873ad8c599232d1fd654bbd2b50855834163fcf082ff1b75ce942aa" Jan 28 16:09:48 crc kubenswrapper[4981]: I0128 16:09:48.340019 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf210db8c873ad8c599232d1fd654bbd2b50855834163fcf082ff1b75ce942aa"} err="failed to get container status \"bf210db8c873ad8c599232d1fd654bbd2b50855834163fcf082ff1b75ce942aa\": rpc error: code = NotFound desc = could not find container \"bf210db8c873ad8c599232d1fd654bbd2b50855834163fcf082ff1b75ce942aa\": container with ID starting with bf210db8c873ad8c599232d1fd654bbd2b50855834163fcf082ff1b75ce942aa not found: ID does not exist" Jan 28 16:09:48 crc kubenswrapper[4981]: I0128 16:09:48.417361 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n9gk\" (UniqueName: \"kubernetes.io/projected/442612b4-927a-47bb-b48a-29a6ab80e0bb-kube-api-access-4n9gk\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:48 crc kubenswrapper[4981]: I0128 16:09:48.471831 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/442612b4-927a-47bb-b48a-29a6ab80e0bb-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "442612b4-927a-47bb-b48a-29a6ab80e0bb" (UID: "442612b4-927a-47bb-b48a-29a6ab80e0bb"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:09:48 crc kubenswrapper[4981]: I0128 16:09:48.519625 4981 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/442612b4-927a-47bb-b48a-29a6ab80e0bb-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 28 16:09:49 crc kubenswrapper[4981]: I0128 16:09:49.327684 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="442612b4-927a-47bb-b48a-29a6ab80e0bb" path="/var/lib/kubelet/pods/442612b4-927a-47bb-b48a-29a6ab80e0bb/volumes" Jan 28 16:09:52 crc kubenswrapper[4981]: I0128 16:09:52.318801 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:09:52 crc kubenswrapper[4981]: E0128 16:09:52.319361 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:10:03 crc kubenswrapper[4981]: I0128 16:10:03.319677 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:10:03 crc kubenswrapper[4981]: E0128 16:10:03.320488 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:10:15 crc kubenswrapper[4981]: I0128 16:10:15.318552 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:10:15 crc kubenswrapper[4981]: E0128 16:10:15.319295 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:10:26 crc kubenswrapper[4981]: I0128 16:10:26.319144 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:10:26 crc kubenswrapper[4981]: E0128 16:10:26.319827 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:10:39 crc kubenswrapper[4981]: I0128 16:10:39.327433 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:10:39 crc kubenswrapper[4981]: E0128 16:10:39.328416 4981 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:10:52 crc kubenswrapper[4981]: I0128 16:10:52.318655 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:10:52 crc kubenswrapper[4981]: E0128 16:10:52.319482 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:11:05 crc kubenswrapper[4981]: I0128 16:11:05.318782 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:11:05 crc kubenswrapper[4981]: E0128 16:11:05.319547 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:11:17 crc kubenswrapper[4981]: I0128 16:11:17.319337 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:11:17 crc kubenswrapper[4981]: E0128 16:11:17.320071 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:11:31 crc kubenswrapper[4981]: I0128 16:11:31.319612 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:11:31 crc kubenswrapper[4981]: E0128 16:11:31.320451 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:11:45 crc kubenswrapper[4981]: I0128 16:11:45.319307 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:11:45 crc kubenswrapper[4981]: E0128 16:11:45.320052 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 28 16:11:45 crc kubenswrapper[4981]: I0128 16:11:45.319307 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539"
Jan 28 16:11:45 crc kubenswrapper[4981]: E0128 16:11:45.320052 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0"
Jan 28 16:11:58 crc kubenswrapper[4981]: I0128 16:11:58.319324 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539"
Jan 28 16:11:58 crc kubenswrapper[4981]: E0128 16:11:58.320106 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0"
Jan 28 16:12:10 crc kubenswrapper[4981]: I0128 16:12:10.319315 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539"
Jan 28 16:12:10 crc kubenswrapper[4981]: E0128 16:12:10.320127 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0"
Jan 28 16:12:23 crc kubenswrapper[4981]: I0128 16:12:23.318861 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539"
Jan 28 16:12:24 crc kubenswrapper[4981]: I0128 16:12:24.028621 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerStarted","Data":"4ab0a2849dc2e00c4edd50099e12dfbd084b3aa7c1423fcb7eb0555a7c8c82d4"}
Jan 28 16:12:50 crc kubenswrapper[4981]: I0128 16:12:50.078268 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tzkv5/must-gather-dn99r"]
Jan 28 16:12:50 crc kubenswrapper[4981]: E0128 16:12:50.079166 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442612b4-927a-47bb-b48a-29a6ab80e0bb" containerName="gather"
Jan 28 16:12:50 crc kubenswrapper[4981]: I0128 16:12:50.079181 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="442612b4-927a-47bb-b48a-29a6ab80e0bb" containerName="gather"
Jan 28 16:12:50 crc kubenswrapper[4981]: E0128 16:12:50.079221 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b665308a-721b-4196-b9d9-82bde4e903ae" containerName="extract-utilities"
Jan 28 16:12:50 crc kubenswrapper[4981]: I0128 16:12:50.079228 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b665308a-721b-4196-b9d9-82bde4e903ae" containerName="extract-utilities"
Jan 28 16:12:50 crc kubenswrapper[4981]: E0128 16:12:50.079243 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442612b4-927a-47bb-b48a-29a6ab80e0bb" containerName="copy"
Jan 28 16:12:50 crc kubenswrapper[4981]: I0128 16:12:50.079249 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="442612b4-927a-47bb-b48a-29a6ab80e0bb" containerName="copy"
Jan 28 16:12:50 crc kubenswrapper[4981]: E0128 16:12:50.079260 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b665308a-721b-4196-b9d9-82bde4e903ae" containerName="extract-content"
Jan 28 16:12:50 crc kubenswrapper[4981]: I0128 16:12:50.079266 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b665308a-721b-4196-b9d9-82bde4e903ae" containerName="extract-content"
Jan 28 16:12:50 crc kubenswrapper[4981]: E0128 16:12:50.079274 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b665308a-721b-4196-b9d9-82bde4e903ae" containerName="registry-server"
Jan 28 16:12:50 crc kubenswrapper[4981]: I0128 16:12:50.079280 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b665308a-721b-4196-b9d9-82bde4e903ae" containerName="registry-server"
Jan 28 16:12:50 crc kubenswrapper[4981]: I0128 16:12:50.079451 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="b665308a-721b-4196-b9d9-82bde4e903ae" containerName="registry-server"
Jan 28 16:12:50 crc kubenswrapper[4981]: I0128 16:12:50.079482 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="442612b4-927a-47bb-b48a-29a6ab80e0bb" containerName="copy"
Jan 28 16:12:50 crc kubenswrapper[4981]: I0128 16:12:50.079496 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="442612b4-927a-47bb-b48a-29a6ab80e0bb" containerName="gather"
Jan 28 16:12:50 crc kubenswrapper[4981]: I0128 16:12:50.080585 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tzkv5/must-gather-dn99r"
Jan 28 16:12:50 crc kubenswrapper[4981]: I0128 16:12:50.084049 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-tzkv5"/"default-dockercfg-vsmfr"
Jan 28 16:12:50 crc kubenswrapper[4981]: I0128 16:12:50.084295 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tzkv5"/"openshift-service-ca.crt"
Jan 28 16:12:50 crc kubenswrapper[4981]: I0128 16:12:50.084752 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-tzkv5"/"kube-root-ca.crt"
Jan 28 16:12:50 crc kubenswrapper[4981]: I0128 16:12:50.105859 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tzkv5/must-gather-dn99r"]
Jan 28 16:12:50 crc kubenswrapper[4981]: I0128 16:12:50.259743 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lmkq\" (UniqueName: \"kubernetes.io/projected/7179bcf6-341a-49a0-b2f8-60e6f5833dfd-kube-api-access-2lmkq\") pod \"must-gather-dn99r\" (UID: \"7179bcf6-341a-49a0-b2f8-60e6f5833dfd\") " pod="openshift-must-gather-tzkv5/must-gather-dn99r"
Jan 28 16:12:50 crc kubenswrapper[4981]: I0128 16:12:50.259881 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7179bcf6-341a-49a0-b2f8-60e6f5833dfd-must-gather-output\") pod \"must-gather-dn99r\" (UID: \"7179bcf6-341a-49a0-b2f8-60e6f5833dfd\") " pod="openshift-must-gather-tzkv5/must-gather-dn99r"
Jan 28 16:12:50 crc kubenswrapper[4981]: I0128 16:12:50.865548 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lmkq\" (UniqueName: \"kubernetes.io/projected/7179bcf6-341a-49a0-b2f8-60e6f5833dfd-kube-api-access-2lmkq\") pod \"must-gather-dn99r\" (UID: \"7179bcf6-341a-49a0-b2f8-60e6f5833dfd\") " pod="openshift-must-gather-tzkv5/must-gather-dn99r"
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7179bcf6-341a-49a0-b2f8-60e6f5833dfd-must-gather-output\") pod \"must-gather-dn99r\" (UID: \"7179bcf6-341a-49a0-b2f8-60e6f5833dfd\") " pod="openshift-must-gather-tzkv5/must-gather-dn99r" Jan 28 16:12:50 crc kubenswrapper[4981]: I0128 16:12:50.866286 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7179bcf6-341a-49a0-b2f8-60e6f5833dfd-must-gather-output\") pod \"must-gather-dn99r\" (UID: \"7179bcf6-341a-49a0-b2f8-60e6f5833dfd\") " pod="openshift-must-gather-tzkv5/must-gather-dn99r" Jan 28 16:12:50 crc kubenswrapper[4981]: I0128 16:12:50.907142 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lmkq\" (UniqueName: \"kubernetes.io/projected/7179bcf6-341a-49a0-b2f8-60e6f5833dfd-kube-api-access-2lmkq\") pod \"must-gather-dn99r\" (UID: \"7179bcf6-341a-49a0-b2f8-60e6f5833dfd\") " pod="openshift-must-gather-tzkv5/must-gather-dn99r" Jan 28 16:12:51 crc kubenswrapper[4981]: I0128 16:12:51.009818 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tzkv5/must-gather-dn99r" Jan 28 16:12:51 crc kubenswrapper[4981]: I0128 16:12:51.593914 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-tzkv5/must-gather-dn99r"] Jan 28 16:12:52 crc kubenswrapper[4981]: I0128 16:12:52.296459 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzkv5/must-gather-dn99r" event={"ID":"7179bcf6-341a-49a0-b2f8-60e6f5833dfd","Type":"ContainerStarted","Data":"7c9ca56d45f40189a00095dfb5767b54644eb55d6a7e46b713ff5bc67a49b1d7"} Jan 28 16:12:52 crc kubenswrapper[4981]: I0128 16:12:52.296733 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzkv5/must-gather-dn99r" event={"ID":"7179bcf6-341a-49a0-b2f8-60e6f5833dfd","Type":"ContainerStarted","Data":"847d1a516e064a00c2f2cfaddf7caf936d0686b794734b748cedf4bbad8a638a"} Jan 28 16:12:52 crc kubenswrapper[4981]: I0128 16:12:52.296744 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzkv5/must-gather-dn99r" event={"ID":"7179bcf6-341a-49a0-b2f8-60e6f5833dfd","Type":"ContainerStarted","Data":"acb6e0c74dac061b3244132fc60a8d196fc80a4a7434c01c6fd7e534e0039f50"} Jan 28 16:12:52 crc kubenswrapper[4981]: I0128 16:12:52.320081 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tzkv5/must-gather-dn99r" podStartSLOduration=2.320065884 podStartE2EDuration="2.320065884s" podCreationTimestamp="2026-01-28 16:12:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:12:52.312423062 +0000 UTC m=+4183.764581303" watchObservedRunningTime="2026-01-28 16:12:52.320065884 +0000 UTC m=+4183.772224125" Jan 28 16:12:54 crc kubenswrapper[4981]: E0128 16:12:54.332908 4981 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.151:38260->38.102.83.151:33457: write tcp 38.102.83.151:38260->38.102.83.151:33457: write: broken pipe Jan 28 16:12:55 crc kubenswrapper[4981]: I0128 16:12:55.226074 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tzkv5/crc-debug-lrq96"] Jan 28 16:12:55 crc kubenswrapper[4981]: I0128 16:12:55.228918 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tzkv5/crc-debug-lrq96" Jan 28 16:12:55 crc kubenswrapper[4981]: I0128 16:12:55.263134 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3-host\") pod \"crc-debug-lrq96\" (UID: \"0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3\") " pod="openshift-must-gather-tzkv5/crc-debug-lrq96" Jan 28 16:12:55 crc kubenswrapper[4981]: I0128 16:12:55.263243 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9tsg\" (UniqueName: \"kubernetes.io/projected/0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3-kube-api-access-g9tsg\") pod \"crc-debug-lrq96\" (UID: \"0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3\") " pod="openshift-must-gather-tzkv5/crc-debug-lrq96" Jan 28 16:12:55 crc kubenswrapper[4981]: I0128 16:12:55.365640 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3-host\") pod \"crc-debug-lrq96\" (UID: \"0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3\") " pod="openshift-must-gather-tzkv5/crc-debug-lrq96" Jan 28 16:12:55 crc kubenswrapper[4981]: I0128 16:12:55.365742 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9tsg\" (UniqueName: \"kubernetes.io/projected/0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3-kube-api-access-g9tsg\") pod \"crc-debug-lrq96\" (UID: \"0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3\") " pod="openshift-must-gather-tzkv5/crc-debug-lrq96" Jan 28 16:12:55 crc kubenswrapper[4981]: I0128 16:12:55.365852 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3-host\") pod \"crc-debug-lrq96\" (UID: \"0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3\") " pod="openshift-must-gather-tzkv5/crc-debug-lrq96" Jan 28 16:12:55 crc kubenswrapper[4981]: I0128 16:12:55.398966 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9tsg\" (UniqueName: \"kubernetes.io/projected/0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3-kube-api-access-g9tsg\") pod \"crc-debug-lrq96\" (UID: \"0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3\") " pod="openshift-must-gather-tzkv5/crc-debug-lrq96" Jan 28 16:12:55 crc kubenswrapper[4981]: I0128 16:12:55.556996 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tzkv5/crc-debug-lrq96" Jan 28 16:12:56 crc kubenswrapper[4981]: I0128 16:12:56.330738 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzkv5/crc-debug-lrq96" event={"ID":"0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3","Type":"ContainerStarted","Data":"d8dc7ac68ad76f19a19d90f9f548ffa62e17f6f197858f08277dc285ddf081d2"} Jan 28 16:12:56 crc kubenswrapper[4981]: I0128 16:12:56.331231 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzkv5/crc-debug-lrq96" event={"ID":"0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3","Type":"ContainerStarted","Data":"0848f60e03e6f2700ac03b283ab9b2f3fd287826a26912100eb95596b52213f2"} Jan 28 16:12:56 crc kubenswrapper[4981]: I0128 16:12:56.349862 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-tzkv5/crc-debug-lrq96" podStartSLOduration=1.349842326 podStartE2EDuration="1.349842326s" podCreationTimestamp="2026-01-28 16:12:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:12:56.342915153 +0000 UTC m=+4187.795073394" watchObservedRunningTime="2026-01-28 16:12:56.349842326 +0000 UTC m=+4187.802000567" Jan 28 16:13:39 crc kubenswrapper[4981]: I0128 16:13:39.731751 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gj6n5"] Jan 28 16:13:39 crc kubenswrapper[4981]: I0128 16:13:39.735057 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gj6n5" Jan 28 16:13:39 crc kubenswrapper[4981]: I0128 16:13:39.747867 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gj6n5"] Jan 28 16:13:39 crc kubenswrapper[4981]: I0128 16:13:39.844880 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5c54246-f24f-4867-a2d4-dfcd7bb3f082-catalog-content\") pod \"community-operators-gj6n5\" (UID: \"b5c54246-f24f-4867-a2d4-dfcd7bb3f082\") " pod="openshift-marketplace/community-operators-gj6n5" Jan 28 16:13:39 crc kubenswrapper[4981]: I0128 16:13:39.844958 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzflg\" (UniqueName: \"kubernetes.io/projected/b5c54246-f24f-4867-a2d4-dfcd7bb3f082-kube-api-access-vzflg\") pod \"community-operators-gj6n5\" (UID: \"b5c54246-f24f-4867-a2d4-dfcd7bb3f082\") " pod="openshift-marketplace/community-operators-gj6n5" Jan 28 16:13:39 crc kubenswrapper[4981]: I0128 16:13:39.845086 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5c54246-f24f-4867-a2d4-dfcd7bb3f082-utilities\") pod \"community-operators-gj6n5\" (UID: \"b5c54246-f24f-4867-a2d4-dfcd7bb3f082\") " pod="openshift-marketplace/community-operators-gj6n5" Jan 28 16:13:39 crc kubenswrapper[4981]: I0128 16:13:39.946758 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5c54246-f24f-4867-a2d4-dfcd7bb3f082-catalog-content\") pod \"community-operators-gj6n5\" (UID: \"b5c54246-f24f-4867-a2d4-dfcd7bb3f082\") " pod="openshift-marketplace/community-operators-gj6n5" Jan 28 16:13:39 crc kubenswrapper[4981]: I0128 16:13:39.946831 4981 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzflg\" (UniqueName: \"kubernetes.io/projected/b5c54246-f24f-4867-a2d4-dfcd7bb3f082-kube-api-access-vzflg\") pod \"community-operators-gj6n5\" (UID: \"b5c54246-f24f-4867-a2d4-dfcd7bb3f082\") " pod="openshift-marketplace/community-operators-gj6n5" Jan 28 16:13:39 crc kubenswrapper[4981]: I0128 16:13:39.946929 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5c54246-f24f-4867-a2d4-dfcd7bb3f082-utilities\") pod \"community-operators-gj6n5\" (UID: \"b5c54246-f24f-4867-a2d4-dfcd7bb3f082\") " pod="openshift-marketplace/community-operators-gj6n5" Jan 28 16:13:39 crc kubenswrapper[4981]: I0128 16:13:39.947482 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5c54246-f24f-4867-a2d4-dfcd7bb3f082-catalog-content\") pod \"community-operators-gj6n5\" (UID: \"b5c54246-f24f-4867-a2d4-dfcd7bb3f082\") " pod="openshift-marketplace/community-operators-gj6n5" Jan 28 16:13:39 crc kubenswrapper[4981]: I0128 16:13:39.947525 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5c54246-f24f-4867-a2d4-dfcd7bb3f082-utilities\") pod \"community-operators-gj6n5\" (UID: \"b5c54246-f24f-4867-a2d4-dfcd7bb3f082\") " pod="openshift-marketplace/community-operators-gj6n5" Jan 28 16:13:39 crc kubenswrapper[4981]: I0128 16:13:39.965568 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzflg\" (UniqueName: \"kubernetes.io/projected/b5c54246-f24f-4867-a2d4-dfcd7bb3f082-kube-api-access-vzflg\") pod \"community-operators-gj6n5\" (UID: \"b5c54246-f24f-4867-a2d4-dfcd7bb3f082\") " pod="openshift-marketplace/community-operators-gj6n5" Jan 28 16:13:40 crc kubenswrapper[4981]: I0128 16:13:40.055967 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gj6n5" Jan 28 16:13:40 crc kubenswrapper[4981]: I0128 16:13:40.446058 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gj6n5"] Jan 28 16:13:40 crc kubenswrapper[4981]: I0128 16:13:40.741846 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gj6n5" event={"ID":"b5c54246-f24f-4867-a2d4-dfcd7bb3f082","Type":"ContainerStarted","Data":"8511c01deaaf940f065a3cf7b00c003e4d667a1ce96fb32eaf3f7e3898dfbc7e"} Jan 28 16:13:41 crc kubenswrapper[4981]: I0128 16:13:41.752021 4981 generic.go:334] "Generic (PLEG): container finished" podID="b5c54246-f24f-4867-a2d4-dfcd7bb3f082" containerID="b8e19c2516eaff0aa51e059b9501a8b1c9adf4ab5870f207a7595393b0da29bf" exitCode=0 Jan 28 16:13:41 crc kubenswrapper[4981]: I0128 16:13:41.752225 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gj6n5" event={"ID":"b5c54246-f24f-4867-a2d4-dfcd7bb3f082","Type":"ContainerDied","Data":"b8e19c2516eaff0aa51e059b9501a8b1c9adf4ab5870f207a7595393b0da29bf"} Jan 28 16:13:41 crc kubenswrapper[4981]: I0128 16:13:41.754970 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 16:13:43 crc kubenswrapper[4981]: I0128 16:13:43.774994 4981 generic.go:334] "Generic (PLEG): container finished" podID="0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3" containerID="d8dc7ac68ad76f19a19d90f9f548ffa62e17f6f197858f08277dc285ddf081d2" exitCode=0 Jan 28 16:13:43 crc kubenswrapper[4981]: I0128 16:13:43.775203 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzkv5/crc-debug-lrq96" event={"ID":"0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3","Type":"ContainerDied","Data":"d8dc7ac68ad76f19a19d90f9f548ffa62e17f6f197858f08277dc285ddf081d2"} Jan 28 16:13:43 crc kubenswrapper[4981]: I0128 16:13:43.778022 4981 generic.go:334] "Generic (PLEG): container finished" podID="b5c54246-f24f-4867-a2d4-dfcd7bb3f082" containerID="83b5f9000991702ac025512e66ff21f80db2bd2caf24b651cb6642ac8f35e2b8" exitCode=0 Jan 28 16:13:43 crc kubenswrapper[4981]: I0128 16:13:43.778055 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gj6n5" event={"ID":"b5c54246-f24f-4867-a2d4-dfcd7bb3f082","Type":"ContainerDied","Data":"83b5f9000991702ac025512e66ff21f80db2bd2caf24b651cb6642ac8f35e2b8"} Jan 28 16:13:44 crc kubenswrapper[4981]: I0128 16:13:44.893057 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tzkv5/crc-debug-lrq96" Jan 28 16:13:44 crc kubenswrapper[4981]: I0128 16:13:44.947525 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tzkv5/crc-debug-lrq96"] Jan 28 16:13:44 crc kubenswrapper[4981]: I0128 16:13:44.957229 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tzkv5/crc-debug-lrq96"] Jan 28 16:13:45 crc kubenswrapper[4981]: I0128 16:13:45.060591 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9tsg\" (UniqueName: \"kubernetes.io/projected/0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3-kube-api-access-g9tsg\") pod \"0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3\" (UID: \"0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3\") " Jan 28 16:13:45 crc kubenswrapper[4981]: I0128 16:13:45.061026 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3-host\") pod \"0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3\" (UID: \"0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3\") " Jan 28 16:13:45 crc kubenswrapper[4981]: I0128 16:13:45.061164 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3-host" (OuterVolumeSpecName: "host") pod "0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3" (UID: "0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:13:45 crc kubenswrapper[4981]: I0128 16:13:45.061822 4981 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3-host\") on node \"crc\" DevicePath \"\"" Jan 28 16:13:45 crc kubenswrapper[4981]: I0128 16:13:45.066352 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3-kube-api-access-g9tsg" (OuterVolumeSpecName: "kube-api-access-g9tsg") pod "0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3" (UID: "0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3"). InnerVolumeSpecName "kube-api-access-g9tsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:13:45 crc kubenswrapper[4981]: I0128 16:13:45.165799 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9tsg\" (UniqueName: \"kubernetes.io/projected/0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3-kube-api-access-g9tsg\") on node \"crc\" DevicePath \"\"" Jan 28 16:13:45 crc kubenswrapper[4981]: I0128 16:13:45.334261 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3" path="/var/lib/kubelet/pods/0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3/volumes" Jan 28 16:13:45 crc kubenswrapper[4981]: I0128 16:13:45.800611 4981 scope.go:117] "RemoveContainer" containerID="d8dc7ac68ad76f19a19d90f9f548ffa62e17f6f197858f08277dc285ddf081d2" Jan 28 16:13:45 crc kubenswrapper[4981]: I0128 16:13:45.800634 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tzkv5/crc-debug-lrq96" Jan 28 16:13:45 crc kubenswrapper[4981]: I0128 16:13:45.803861 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gj6n5" event={"ID":"b5c54246-f24f-4867-a2d4-dfcd7bb3f082","Type":"ContainerStarted","Data":"0ea2aaa049e01139e7fb3b71860accfddb8fb1c9e6d00f8fb44e8d66367cdec8"} Jan 28 16:13:45 crc kubenswrapper[4981]: I0128 16:13:45.840249 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gj6n5" podStartSLOduration=3.262838084 podStartE2EDuration="6.840230206s" podCreationTimestamp="2026-01-28 16:13:39 +0000 UTC" firstStartedPulling="2026-01-28 16:13:41.754740299 +0000 UTC m=+4233.206898540" lastFinishedPulling="2026-01-28 16:13:45.332132421 +0000 UTC m=+4236.784290662" observedRunningTime="2026-01-28 16:13:45.839451665 +0000 UTC m=+4237.291609926" watchObservedRunningTime="2026-01-28 16:13:45.840230206 +0000 UTC m=+4237.292388447" Jan 28 16:13:46 crc kubenswrapper[4981]: I0128 16:13:46.139468 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tzkv5/crc-debug-fq7wj"] Jan 28 16:13:46 crc kubenswrapper[4981]: E0128 16:13:46.139863 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3" containerName="container-00" Jan 28 16:13:46 crc kubenswrapper[4981]: I0128 16:13:46.139875 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3" containerName="container-00" Jan 28 16:13:46 crc kubenswrapper[4981]: I0128 16:13:46.140059 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="0430bc9a-57d1-4314-bdfc-8dc9d5ca36b3" containerName="container-00" Jan 28 16:13:46 crc kubenswrapper[4981]: I0128 16:13:46.140800 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tzkv5/crc-debug-fq7wj" Jan 28 16:13:46 crc kubenswrapper[4981]: I0128 16:13:46.288421 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpxjd\" (UniqueName: \"kubernetes.io/projected/022d0ed3-b2fa-4c5e-a262-8f339d16cebe-kube-api-access-rpxjd\") pod \"crc-debug-fq7wj\" (UID: \"022d0ed3-b2fa-4c5e-a262-8f339d16cebe\") " pod="openshift-must-gather-tzkv5/crc-debug-fq7wj" Jan 28 16:13:46 crc kubenswrapper[4981]: I0128 16:13:46.288731 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/022d0ed3-b2fa-4c5e-a262-8f339d16cebe-host\") pod \"crc-debug-fq7wj\" (UID: \"022d0ed3-b2fa-4c5e-a262-8f339d16cebe\") " pod="openshift-must-gather-tzkv5/crc-debug-fq7wj" Jan 28 16:13:46 crc kubenswrapper[4981]: I0128 16:13:46.391582 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpxjd\" (UniqueName: \"kubernetes.io/projected/022d0ed3-b2fa-4c5e-a262-8f339d16cebe-kube-api-access-rpxjd\") pod \"crc-debug-fq7wj\" (UID: \"022d0ed3-b2fa-4c5e-a262-8f339d16cebe\") " pod="openshift-must-gather-tzkv5/crc-debug-fq7wj" Jan 28 16:13:46 crc kubenswrapper[4981]: I0128 16:13:46.391685 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/022d0ed3-b2fa-4c5e-a262-8f339d16cebe-host\") pod \"crc-debug-fq7wj\" (UID: \"022d0ed3-b2fa-4c5e-a262-8f339d16cebe\") " pod="openshift-must-gather-tzkv5/crc-debug-fq7wj" Jan 28 16:13:46 crc kubenswrapper[4981]: I0128 16:13:46.391854 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/022d0ed3-b2fa-4c5e-a262-8f339d16cebe-host\") pod \"crc-debug-fq7wj\" (UID: \"022d0ed3-b2fa-4c5e-a262-8f339d16cebe\") " pod="openshift-must-gather-tzkv5/crc-debug-fq7wj" Jan 28 16:13:46 crc kubenswrapper[4981]: I0128 16:13:46.429265 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpxjd\" (UniqueName: \"kubernetes.io/projected/022d0ed3-b2fa-4c5e-a262-8f339d16cebe-kube-api-access-rpxjd\") pod \"crc-debug-fq7wj\" (UID: \"022d0ed3-b2fa-4c5e-a262-8f339d16cebe\") " pod="openshift-must-gather-tzkv5/crc-debug-fq7wj" Jan 28 16:13:46 crc kubenswrapper[4981]: I0128 16:13:46.458311 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tzkv5/crc-debug-fq7wj" Jan 28 16:13:46 crc kubenswrapper[4981]: W0128 16:13:46.505605 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod022d0ed3_b2fa_4c5e_a262_8f339d16cebe.slice/crio-83bb697e78c8c9a4a94f1b9dd73e9d27684fd0abe42f1981f7777fe9e242ecfe WatchSource:0}: Error finding container 83bb697e78c8c9a4a94f1b9dd73e9d27684fd0abe42f1981f7777fe9e242ecfe: Status 404 returned error can't find the container with id 83bb697e78c8c9a4a94f1b9dd73e9d27684fd0abe42f1981f7777fe9e242ecfe Jan 28 16:13:46 crc kubenswrapper[4981]: I0128 16:13:46.813924 4981 generic.go:334] "Generic (PLEG): container finished" podID="022d0ed3-b2fa-4c5e-a262-8f339d16cebe" containerID="51bb8ad1b6fc62ff596642d666ab0860b7d10be522bf769b7c95e00610201d82" exitCode=0 Jan 28 16:13:46 crc kubenswrapper[4981]: I0128 16:13:46.813999 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzkv5/crc-debug-fq7wj" event={"ID":"022d0ed3-b2fa-4c5e-a262-8f339d16cebe","Type":"ContainerDied","Data":"51bb8ad1b6fc62ff596642d666ab0860b7d10be522bf769b7c95e00610201d82"} Jan 28 16:13:46 crc kubenswrapper[4981]: I0128 16:13:46.814026 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzkv5/crc-debug-fq7wj" event={"ID":"022d0ed3-b2fa-4c5e-a262-8f339d16cebe","Type":"ContainerStarted","Data":"83bb697e78c8c9a4a94f1b9dd73e9d27684fd0abe42f1981f7777fe9e242ecfe"} Jan 28 16:13:47 crc kubenswrapper[4981]: I0128 16:13:47.268706 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tzkv5/crc-debug-fq7wj"] Jan 28 16:13:47 crc kubenswrapper[4981]: I0128 16:13:47.277253 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tzkv5/crc-debug-fq7wj"] Jan 28 16:13:47 crc kubenswrapper[4981]: I0128 16:13:47.932973 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tzkv5/crc-debug-fq7wj" Jan 28 16:13:48 crc kubenswrapper[4981]: I0128 16:13:48.124743 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/022d0ed3-b2fa-4c5e-a262-8f339d16cebe-host\") pod \"022d0ed3-b2fa-4c5e-a262-8f339d16cebe\" (UID: \"022d0ed3-b2fa-4c5e-a262-8f339d16cebe\") " Jan 28 16:13:48 crc kubenswrapper[4981]: I0128 16:13:48.124888 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/022d0ed3-b2fa-4c5e-a262-8f339d16cebe-host" (OuterVolumeSpecName: "host") pod "022d0ed3-b2fa-4c5e-a262-8f339d16cebe" (UID: "022d0ed3-b2fa-4c5e-a262-8f339d16cebe"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:13:48 crc kubenswrapper[4981]: I0128 16:13:48.125559 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpxjd\" (UniqueName: \"kubernetes.io/projected/022d0ed3-b2fa-4c5e-a262-8f339d16cebe-kube-api-access-rpxjd\") pod \"022d0ed3-b2fa-4c5e-a262-8f339d16cebe\" (UID: \"022d0ed3-b2fa-4c5e-a262-8f339d16cebe\") " Jan 28 16:13:48 crc kubenswrapper[4981]: I0128 16:13:48.126427 4981 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/022d0ed3-b2fa-4c5e-a262-8f339d16cebe-host\") on node \"crc\" DevicePath \"\"" Jan 28 16:13:48 crc kubenswrapper[4981]: I0128 16:13:48.148278 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/022d0ed3-b2fa-4c5e-a262-8f339d16cebe-kube-api-access-rpxjd" (OuterVolumeSpecName: "kube-api-access-rpxjd") pod "022d0ed3-b2fa-4c5e-a262-8f339d16cebe" (UID: "022d0ed3-b2fa-4c5e-a262-8f339d16cebe"). InnerVolumeSpecName "kube-api-access-rpxjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:13:48 crc kubenswrapper[4981]: I0128 16:13:48.230981 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpxjd\" (UniqueName: \"kubernetes.io/projected/022d0ed3-b2fa-4c5e-a262-8f339d16cebe-kube-api-access-rpxjd\") on node \"crc\" DevicePath \"\"" Jan 28 16:13:48 crc kubenswrapper[4981]: I0128 16:13:48.446597 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-tzkv5/crc-debug-w7jd9"] Jan 28 16:13:48 crc kubenswrapper[4981]: E0128 16:13:48.446975 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="022d0ed3-b2fa-4c5e-a262-8f339d16cebe" containerName="container-00" Jan 28 16:13:48 crc kubenswrapper[4981]: I0128 16:13:48.446988 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="022d0ed3-b2fa-4c5e-a262-8f339d16cebe" containerName="container-00" Jan 28 16:13:48 crc kubenswrapper[4981]: I0128 16:13:48.447363 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="022d0ed3-b2fa-4c5e-a262-8f339d16cebe" containerName="container-00" Jan 28 16:13:48 crc kubenswrapper[4981]: I0128 16:13:48.447961 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tzkv5/crc-debug-w7jd9" Jan 28 16:13:48 crc kubenswrapper[4981]: I0128 16:13:48.641456 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7qvt\" (UniqueName: \"kubernetes.io/projected/ffdf9313-b2e1-46b2-a77e-f4e046638535-kube-api-access-j7qvt\") pod \"crc-debug-w7jd9\" (UID: \"ffdf9313-b2e1-46b2-a77e-f4e046638535\") " pod="openshift-must-gather-tzkv5/crc-debug-w7jd9" Jan 28 16:13:48 crc kubenswrapper[4981]: I0128 16:13:48.642159 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ffdf9313-b2e1-46b2-a77e-f4e046638535-host\") pod \"crc-debug-w7jd9\" (UID: \"ffdf9313-b2e1-46b2-a77e-f4e046638535\") " pod="openshift-must-gather-tzkv5/crc-debug-w7jd9" Jan 28 16:13:48 crc kubenswrapper[4981]: I0128 16:13:48.744304 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7qvt\" (UniqueName: \"kubernetes.io/projected/ffdf9313-b2e1-46b2-a77e-f4e046638535-kube-api-access-j7qvt\") pod \"crc-debug-w7jd9\" (UID: \"ffdf9313-b2e1-46b2-a77e-f4e046638535\") " pod="openshift-must-gather-tzkv5/crc-debug-w7jd9" Jan 28 16:13:48 crc kubenswrapper[4981]: I0128 16:13:48.744491 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ffdf9313-b2e1-46b2-a77e-f4e046638535-host\") pod \"crc-debug-w7jd9\" (UID: \"ffdf9313-b2e1-46b2-a77e-f4e046638535\") " pod="openshift-must-gather-tzkv5/crc-debug-w7jd9" Jan 28 16:13:48 crc kubenswrapper[4981]: I0128 16:13:48.744650 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ffdf9313-b2e1-46b2-a77e-f4e046638535-host\") pod \"crc-debug-w7jd9\" (UID: \"ffdf9313-b2e1-46b2-a77e-f4e046638535\") " pod="openshift-must-gather-tzkv5/crc-debug-w7jd9" Jan 28 16:13:48 crc kubenswrapper[4981]: I0128 16:13:48.767164 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7qvt\" (UniqueName: \"kubernetes.io/projected/ffdf9313-b2e1-46b2-a77e-f4e046638535-kube-api-access-j7qvt\") pod \"crc-debug-w7jd9\" (UID: \"ffdf9313-b2e1-46b2-a77e-f4e046638535\") " pod="openshift-must-gather-tzkv5/crc-debug-w7jd9" Jan 28 16:13:48 crc kubenswrapper[4981]: I0128 16:13:48.838867 4981 scope.go:117] "RemoveContainer" containerID="51bb8ad1b6fc62ff596642d666ab0860b7d10be522bf769b7c95e00610201d82" Jan 28 16:13:48 crc kubenswrapper[4981]: I0128 16:13:48.839015 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tzkv5/crc-debug-fq7wj" Jan 28 16:13:49 crc kubenswrapper[4981]: I0128 16:13:49.065538 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-tzkv5/crc-debug-w7jd9" Jan 28 16:13:49 crc kubenswrapper[4981]: I0128 16:13:49.334546 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="022d0ed3-b2fa-4c5e-a262-8f339d16cebe" path="/var/lib/kubelet/pods/022d0ed3-b2fa-4c5e-a262-8f339d16cebe/volumes" Jan 28 16:13:49 crc kubenswrapper[4981]: I0128 16:13:49.850410 4981 generic.go:334] "Generic (PLEG): container finished" podID="ffdf9313-b2e1-46b2-a77e-f4e046638535" containerID="bb6c58b40cad0a728b9b77ec7cc68c3fd58ecf5556a360ba37cab7e14b646d14" exitCode=0 Jan 28 16:13:49 crc kubenswrapper[4981]: I0128 16:13:49.850523 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzkv5/crc-debug-w7jd9" event={"ID":"ffdf9313-b2e1-46b2-a77e-f4e046638535","Type":"ContainerDied","Data":"bb6c58b40cad0a728b9b77ec7cc68c3fd58ecf5556a360ba37cab7e14b646d14"} Jan 28 16:13:49 crc kubenswrapper[4981]: I0128 16:13:49.850583 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzkv5/crc-debug-w7jd9" event={"ID":"ffdf9313-b2e1-46b2-a77e-f4e046638535","Type":"ContainerStarted","Data":"ccf65820680fa4a11422940d8d0993bb3aa618812cb91cde00338272fdd2ed9d"} Jan 28 16:13:49 crc kubenswrapper[4981]: I0128 16:13:49.892896 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tzkv5/crc-debug-w7jd9"] Jan 28 16:13:49 crc kubenswrapper[4981]: I0128 16:13:49.902401 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tzkv5/crc-debug-w7jd9"] Jan 28 16:13:50 crc kubenswrapper[4981]: I0128 16:13:50.056209 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gj6n5" Jan 28 16:13:50 crc kubenswrapper[4981]: I0128 16:13:50.056534 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gj6n5" Jan 28 16:13:50 crc kubenswrapper[4981]: I0128 16:13:50.098281 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gj6n5" Jan 28 16:13:50 crc kubenswrapper[4981]: I0128 16:13:50.942623 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gj6n5" Jan 28 16:13:50 crc kubenswrapper[4981]: I0128 16:13:50.978515 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tzkv5/crc-debug-w7jd9" Jan 28 16:13:50 crc kubenswrapper[4981]: I0128 16:13:50.981542 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ffdf9313-b2e1-46b2-a77e-f4e046638535-host\") pod \"ffdf9313-b2e1-46b2-a77e-f4e046638535\" (UID: \"ffdf9313-b2e1-46b2-a77e-f4e046638535\") " Jan 28 16:13:50 crc kubenswrapper[4981]: I0128 16:13:50.981608 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7qvt\" (UniqueName: \"kubernetes.io/projected/ffdf9313-b2e1-46b2-a77e-f4e046638535-kube-api-access-j7qvt\") pod \"ffdf9313-b2e1-46b2-a77e-f4e046638535\" (UID: \"ffdf9313-b2e1-46b2-a77e-f4e046638535\") " Jan 28 16:13:50 crc kubenswrapper[4981]: I0128 16:13:50.981683 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffdf9313-b2e1-46b2-a77e-f4e046638535-host" (OuterVolumeSpecName: "host") pod "ffdf9313-b2e1-46b2-a77e-f4e046638535" (UID: "ffdf9313-b2e1-46b2-a77e-f4e046638535"). 
InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 16:13:50 crc kubenswrapper[4981]: I0128 16:13:50.982232 4981 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ffdf9313-b2e1-46b2-a77e-f4e046638535-host\") on node \"crc\" DevicePath \"\"" Jan 28 16:13:50 crc kubenswrapper[4981]: I0128 16:13:50.990454 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffdf9313-b2e1-46b2-a77e-f4e046638535-kube-api-access-j7qvt" (OuterVolumeSpecName: "kube-api-access-j7qvt") pod "ffdf9313-b2e1-46b2-a77e-f4e046638535" (UID: "ffdf9313-b2e1-46b2-a77e-f4e046638535"). InnerVolumeSpecName "kube-api-access-j7qvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:13:51 crc kubenswrapper[4981]: I0128 16:13:51.002066 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gj6n5"] Jan 28 16:13:51 crc kubenswrapper[4981]: I0128 16:13:51.084143 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7qvt\" (UniqueName: \"kubernetes.io/projected/ffdf9313-b2e1-46b2-a77e-f4e046638535-kube-api-access-j7qvt\") on node \"crc\" DevicePath \"\"" Jan 28 16:13:51 crc kubenswrapper[4981]: I0128 16:13:51.330628 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffdf9313-b2e1-46b2-a77e-f4e046638535" path="/var/lib/kubelet/pods/ffdf9313-b2e1-46b2-a77e-f4e046638535/volumes" Jan 28 16:13:51 crc kubenswrapper[4981]: I0128 16:13:51.878486 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tzkv5/crc-debug-w7jd9" Jan 28 16:13:51 crc kubenswrapper[4981]: I0128 16:13:51.878531 4981 scope.go:117] "RemoveContainer" containerID="bb6c58b40cad0a728b9b77ec7cc68c3fd58ecf5556a360ba37cab7e14b646d14" Jan 28 16:13:52 crc kubenswrapper[4981]: I0128 16:13:52.887961 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gj6n5" podUID="b5c54246-f24f-4867-a2d4-dfcd7bb3f082" containerName="registry-server" containerID="cri-o://0ea2aaa049e01139e7fb3b71860accfddb8fb1c9e6d00f8fb44e8d66367cdec8" gracePeriod=2 Jan 28 16:13:53 crc kubenswrapper[4981]: I0128 16:13:53.342308 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gj6n5" Jan 28 16:13:53 crc kubenswrapper[4981]: I0128 16:13:53.541895 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5c54246-f24f-4867-a2d4-dfcd7bb3f082-catalog-content\") pod \"b5c54246-f24f-4867-a2d4-dfcd7bb3f082\" (UID: \"b5c54246-f24f-4867-a2d4-dfcd7bb3f082\") " Jan 28 16:13:53 crc kubenswrapper[4981]: I0128 16:13:53.542030 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5c54246-f24f-4867-a2d4-dfcd7bb3f082-utilities\") pod \"b5c54246-f24f-4867-a2d4-dfcd7bb3f082\" (UID: \"b5c54246-f24f-4867-a2d4-dfcd7bb3f082\") " Jan 28 16:13:53 crc kubenswrapper[4981]: I0128 16:13:53.542231 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzflg\" (UniqueName: \"kubernetes.io/projected/b5c54246-f24f-4867-a2d4-dfcd7bb3f082-kube-api-access-vzflg\") pod \"b5c54246-f24f-4867-a2d4-dfcd7bb3f082\" (UID: \"b5c54246-f24f-4867-a2d4-dfcd7bb3f082\") " Jan 28 16:13:53 crc kubenswrapper[4981]: I0128 16:13:53.542971 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5c54246-f24f-4867-a2d4-dfcd7bb3f082-utilities" (OuterVolumeSpecName: "utilities") pod "b5c54246-f24f-4867-a2d4-dfcd7bb3f082" (UID: "b5c54246-f24f-4867-a2d4-dfcd7bb3f082"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:13:53 crc kubenswrapper[4981]: I0128 16:13:53.548865 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5c54246-f24f-4867-a2d4-dfcd7bb3f082-kube-api-access-vzflg" (OuterVolumeSpecName: "kube-api-access-vzflg") pod "b5c54246-f24f-4867-a2d4-dfcd7bb3f082" (UID: "b5c54246-f24f-4867-a2d4-dfcd7bb3f082"). InnerVolumeSpecName "kube-api-access-vzflg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:13:53 crc kubenswrapper[4981]: I0128 16:13:53.604166 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5c54246-f24f-4867-a2d4-dfcd7bb3f082-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5c54246-f24f-4867-a2d4-dfcd7bb3f082" (UID: "b5c54246-f24f-4867-a2d4-dfcd7bb3f082"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:13:53 crc kubenswrapper[4981]: I0128 16:13:53.644913 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5c54246-f24f-4867-a2d4-dfcd7bb3f082-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:13:53 crc kubenswrapper[4981]: I0128 16:13:53.644955 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5c54246-f24f-4867-a2d4-dfcd7bb3f082-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:13:53 crc kubenswrapper[4981]: I0128 16:13:53.644968 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzflg\" (UniqueName: \"kubernetes.io/projected/b5c54246-f24f-4867-a2d4-dfcd7bb3f082-kube-api-access-vzflg\") on node \"crc\" DevicePath \"\"" Jan 28 16:13:53 crc kubenswrapper[4981]: I0128 16:13:53.899795 4981 generic.go:334] "Generic (PLEG): container finished" podID="b5c54246-f24f-4867-a2d4-dfcd7bb3f082" containerID="0ea2aaa049e01139e7fb3b71860accfddb8fb1c9e6d00f8fb44e8d66367cdec8" exitCode=0 Jan 28 16:13:53 crc kubenswrapper[4981]: I0128 16:13:53.899865 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gj6n5" Jan 28 16:13:53 crc kubenswrapper[4981]: I0128 16:13:53.899889 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gj6n5" event={"ID":"b5c54246-f24f-4867-a2d4-dfcd7bb3f082","Type":"ContainerDied","Data":"0ea2aaa049e01139e7fb3b71860accfddb8fb1c9e6d00f8fb44e8d66367cdec8"} Jan 28 16:13:53 crc kubenswrapper[4981]: I0128 16:13:53.900386 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gj6n5" event={"ID":"b5c54246-f24f-4867-a2d4-dfcd7bb3f082","Type":"ContainerDied","Data":"8511c01deaaf940f065a3cf7b00c003e4d667a1ce96fb32eaf3f7e3898dfbc7e"} Jan 28 16:13:53 crc kubenswrapper[4981]: I0128 16:13:53.900413 4981 scope.go:117] "RemoveContainer" containerID="0ea2aaa049e01139e7fb3b71860accfddb8fb1c9e6d00f8fb44e8d66367cdec8" Jan 28 16:13:53 crc kubenswrapper[4981]: I0128 16:13:53.929129 4981 scope.go:117] "RemoveContainer" containerID="83b5f9000991702ac025512e66ff21f80db2bd2caf24b651cb6642ac8f35e2b8" Jan 28 16:13:53 crc kubenswrapper[4981]: I0128 16:13:53.935539 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gj6n5"] Jan 28 16:13:53 crc kubenswrapper[4981]: I0128 16:13:53.957967 4981 scope.go:117] "RemoveContainer" containerID="b8e19c2516eaff0aa51e059b9501a8b1c9adf4ab5870f207a7595393b0da29bf" Jan 28 16:13:53 crc kubenswrapper[4981]: I0128 16:13:53.960077 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gj6n5"] Jan 28 16:13:54 crc kubenswrapper[4981]: I0128 16:13:54.008134 4981 scope.go:117] "RemoveContainer" containerID="0ea2aaa049e01139e7fb3b71860accfddb8fb1c9e6d00f8fb44e8d66367cdec8" Jan 28 16:13:54 crc kubenswrapper[4981]: E0128 16:13:54.009266 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ea2aaa049e01139e7fb3b71860accfddb8fb1c9e6d00f8fb44e8d66367cdec8\": container with ID starting with 0ea2aaa049e01139e7fb3b71860accfddb8fb1c9e6d00f8fb44e8d66367cdec8 not found: ID does not exist" containerID="0ea2aaa049e01139e7fb3b71860accfddb8fb1c9e6d00f8fb44e8d66367cdec8" Jan 28 16:13:54 crc kubenswrapper[4981]: I0128 16:13:54.009318 
4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ea2aaa049e01139e7fb3b71860accfddb8fb1c9e6d00f8fb44e8d66367cdec8"} err="failed to get container status \"0ea2aaa049e01139e7fb3b71860accfddb8fb1c9e6d00f8fb44e8d66367cdec8\": rpc error: code = NotFound desc = could not find container \"0ea2aaa049e01139e7fb3b71860accfddb8fb1c9e6d00f8fb44e8d66367cdec8\": container with ID starting with 0ea2aaa049e01139e7fb3b71860accfddb8fb1c9e6d00f8fb44e8d66367cdec8 not found: ID does not exist" Jan 28 16:13:54 crc kubenswrapper[4981]: I0128 16:13:54.009343 4981 scope.go:117] "RemoveContainer" containerID="83b5f9000991702ac025512e66ff21f80db2bd2caf24b651cb6642ac8f35e2b8" Jan 28 16:13:54 crc kubenswrapper[4981]: E0128 16:13:54.010405 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83b5f9000991702ac025512e66ff21f80db2bd2caf24b651cb6642ac8f35e2b8\": container with ID starting with 83b5f9000991702ac025512e66ff21f80db2bd2caf24b651cb6642ac8f35e2b8 not found: ID does not exist" containerID="83b5f9000991702ac025512e66ff21f80db2bd2caf24b651cb6642ac8f35e2b8" Jan 28 16:13:54 crc kubenswrapper[4981]: I0128 16:13:54.010447 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83b5f9000991702ac025512e66ff21f80db2bd2caf24b651cb6642ac8f35e2b8"} err="failed to get container status \"83b5f9000991702ac025512e66ff21f80db2bd2caf24b651cb6642ac8f35e2b8\": rpc error: code = NotFound desc = could not find container \"83b5f9000991702ac025512e66ff21f80db2bd2caf24b651cb6642ac8f35e2b8\": container with ID starting with 83b5f9000991702ac025512e66ff21f80db2bd2caf24b651cb6642ac8f35e2b8 not found: ID does not exist" Jan 28 16:13:54 crc kubenswrapper[4981]: I0128 16:13:54.010475 4981 scope.go:117] "RemoveContainer" containerID="b8e19c2516eaff0aa51e059b9501a8b1c9adf4ab5870f207a7595393b0da29bf" Jan 28 16:13:54 crc kubenswrapper[4981]: E0128 16:13:54.011404 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8e19c2516eaff0aa51e059b9501a8b1c9adf4ab5870f207a7595393b0da29bf\": container with ID starting with b8e19c2516eaff0aa51e059b9501a8b1c9adf4ab5870f207a7595393b0da29bf not found: ID does not exist" containerID="b8e19c2516eaff0aa51e059b9501a8b1c9adf4ab5870f207a7595393b0da29bf" Jan 28 16:13:54 crc kubenswrapper[4981]: I0128 16:13:54.011430 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8e19c2516eaff0aa51e059b9501a8b1c9adf4ab5870f207a7595393b0da29bf"} err="failed to get container status \"b8e19c2516eaff0aa51e059b9501a8b1c9adf4ab5870f207a7595393b0da29bf\": rpc error: code = NotFound desc = could not find container \"b8e19c2516eaff0aa51e059b9501a8b1c9adf4ab5870f207a7595393b0da29bf\": container with ID starting with b8e19c2516eaff0aa51e059b9501a8b1c9adf4ab5870f207a7595393b0da29bf not found: ID does not exist" Jan 28 16:13:55 crc kubenswrapper[4981]: I0128 16:13:55.332037 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5c54246-f24f-4867-a2d4-dfcd7bb3f082" path="/var/lib/kubelet/pods/b5c54246-f24f-4867-a2d4-dfcd7bb3f082/volumes" Jan 28 16:14:11 crc kubenswrapper[4981]: I0128 16:14:11.902803 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-f6d8cf6db-mshgs_46c7dad2-c36e-4c3e-80f0-c6e3ec088723/barbican-api/0.log" Jan 28 16:14:12 crc kubenswrapper[4981]: I0128 16:14:12.137710 4981 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-f6d8cf6db-mshgs_46c7dad2-c36e-4c3e-80f0-c6e3ec088723/barbican-api-log/0.log" Jan 28 16:14:12 crc kubenswrapper[4981]: I0128 16:14:12.168986 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7bd55659d6-qkw24_9001e7fd-73ee-4169-a239-fa6452ac69d2/barbican-keystone-listener-log/0.log" Jan 28 16:14:12 crc kubenswrapper[4981]: I0128 16:14:12.170748 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7bd55659d6-qkw24_9001e7fd-73ee-4169-a239-fa6452ac69d2/barbican-keystone-listener/0.log" Jan 28 16:14:12 crc kubenswrapper[4981]: I0128 16:14:12.342565 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-695f5b56f5-7h6s9_2e2d1563-14e3-41bc-8830-51e28da77c5e/barbican-worker/0.log" Jan 28 16:14:12 crc kubenswrapper[4981]: I0128 16:14:12.412442 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-695f5b56f5-7h6s9_2e2d1563-14e3-41bc-8830-51e28da77c5e/barbican-worker-log/0.log" Jan 28 16:14:12 crc kubenswrapper[4981]: I0128 16:14:12.645368 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-2fd7x_c8005a06-6ceb-4918-867e-216081419a3a/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:14:12 crc kubenswrapper[4981]: I0128 16:14:12.689919 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bfb88da5-80c7-481b-89ba-2c5c08c258c0/ceilometer-central-agent/0.log" Jan 28 16:14:12 crc kubenswrapper[4981]: I0128 16:14:12.729938 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bfb88da5-80c7-481b-89ba-2c5c08c258c0/ceilometer-notification-agent/0.log" Jan 28 16:14:12 crc kubenswrapper[4981]: I0128 16:14:12.865835 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bfb88da5-80c7-481b-89ba-2c5c08c258c0/proxy-httpd/0.log" Jan 28 16:14:12 crc kubenswrapper[4981]: I0128 16:14:12.879340 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_bfb88da5-80c7-481b-89ba-2c5c08c258c0/sg-core/0.log" Jan 28 16:14:13 crc kubenswrapper[4981]: I0128 16:14:13.020455 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_934e0f8e-1579-4d0e-a34a-53d266c4612a/cinder-api/0.log" Jan 28 16:14:13 crc kubenswrapper[4981]: I0128 16:14:13.085653 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_934e0f8e-1579-4d0e-a34a-53d266c4612a/cinder-api-log/0.log" Jan 28 16:14:13 crc kubenswrapper[4981]: I0128 16:14:13.333597 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8617985e-2166-4325-82d1-6004e7eff07d/probe/0.log" Jan 28 16:14:13 crc kubenswrapper[4981]: I0128 16:14:13.342124 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8617985e-2166-4325-82d1-6004e7eff07d/cinder-scheduler/0.log" Jan 28 16:14:13 crc kubenswrapper[4981]: I0128 16:14:13.369983 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ncl8p_f50a5359-8f8b-47bc-a345-c91ace0f611f/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:14:13 crc kubenswrapper[4981]: I0128 16:14:13.565934 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-5rnt6_c7607b3a-6cc7-4240-acd3-866b7d39e6be/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:14:13 crc kubenswrapper[4981]: I0128 16:14:13.578983 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-t7szx_83f911a5-2f1f-4cc2-a2cb-74c94632dd94/init/0.log" Jan 28 16:14:13 crc kubenswrapper[4981]: I0128 16:14:13.877236 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-t7szx_83f911a5-2f1f-4cc2-a2cb-74c94632dd94/dnsmasq-dns/0.log" Jan 28 16:14:13 crc kubenswrapper[4981]: I0128 16:14:13.904486 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-xbxz7_f65e4d3b-6e8a-42cf-8c86-b6f50a9d4628/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:14:13 crc kubenswrapper[4981]: I0128 16:14:13.904494 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-cb6ffcf87-t7szx_83f911a5-2f1f-4cc2-a2cb-74c94632dd94/init/0.log" Jan 28 16:14:14 crc kubenswrapper[4981]: I0128 16:14:14.073446 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_7888281b-1740-4d52-9752-cac22c11d44e/glance-httpd/0.log" Jan 28 16:14:14 crc kubenswrapper[4981]: I0128 16:14:14.277708 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_7888281b-1740-4d52-9752-cac22c11d44e/glance-log/0.log" Jan 28 16:14:14 crc kubenswrapper[4981]: I0128 16:14:14.431451 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ce43940a-33fa-4da9-a910-a57dc6230e57/glance-httpd/0.log" Jan 28 16:14:14 crc kubenswrapper[4981]: I0128 16:14:14.497437 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_ce43940a-33fa-4da9-a910-a57dc6230e57/glance-log/0.log" Jan 28 16:14:14 crc kubenswrapper[4981]: I0128 16:14:14.599232 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6d9d89fcfb-mwsgh_d02db79a-7f4f-453c-8e92-2e8291f442f1/horizon/0.log" Jan 28 16:14:14 crc kubenswrapper[4981]: I0128 16:14:14.790847 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-s2n9z_d3bc3ef2-85fc-4c54-b065-b7d5d889b6d4/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:14:15 crc kubenswrapper[4981]: I0128 16:14:15.019781 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-7q8hj_7dd2589f-e346-4ce7-a193-1e8eac0a2318/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:14:15 crc kubenswrapper[4981]: I0128 16:14:15.038100 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6d9d89fcfb-mwsgh_d02db79a-7f4f-453c-8e92-2e8291f442f1/horizon-log/0.log" Jan 28 16:14:15 crc kubenswrapper[4981]: I0128 16:14:15.261993 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-569ff6748d-zhgp9_8f290ab1-489a-4b7e-9815-a6bd2a528f5e/keystone-api/0.log" Jan 28 16:14:15 crc kubenswrapper[4981]: I0128 16:14:15.270833 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29493601-rsfpp_46fad62e-3ca9-4842-a2c7-0e0fd654d37f/keystone-cron/0.log" Jan 28 16:14:15 crc kubenswrapper[4981]: I0128 16:14:15.468354 4981 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_kube-state-metrics-0_ead8d0cb-bf17-4ff6-b6ed-65c7205194cc/kube-state-metrics/0.log" Jan 28 16:14:15 crc kubenswrapper[4981]: I0128 16:14:15.581109 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-pqf6p_e78a3044-c335-4c2f-9fa6-314f2d40ef11/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:14:15 crc kubenswrapper[4981]: I0128 16:14:15.936350 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-787f455647-ngpww_b73f2a77-d6ea-418e-93a0-9d5a928637eb/neutron-api/0.log" Jan 28 16:14:15 crc kubenswrapper[4981]: I0128 16:14:15.942958 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-787f455647-ngpww_b73f2a77-d6ea-418e-93a0-9d5a928637eb/neutron-httpd/0.log" Jan 28 16:14:15 crc kubenswrapper[4981]: I0128 16:14:15.999928 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-v4rdp_fa2e6c63-891a-4395-8270-942b5d5f168f/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:14:17 crc kubenswrapper[4981]: I0128 16:14:17.009691 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c04c7269-f8ca-43e7-a204-d1ab4429f2f5/nova-api-log/0.log" Jan 28 16:14:17 crc kubenswrapper[4981]: I0128 16:14:17.161802 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f11d89fe-23cc-4fe1-b03e-c3c5e3613280/nova-cell0-conductor-conductor/0.log" Jan 28 16:14:17 crc kubenswrapper[4981]: I0128 16:14:17.359233 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f6994064-73bf-495e-928b-5ef46487e938/nova-cell1-conductor-conductor/0.log" Jan 28 16:14:17 crc kubenswrapper[4981]: I0128 16:14:17.594233 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_a7a16e56-277e-47a2-91e2-21a8ec2976db/nova-cell1-novncproxy-novncproxy/0.log" Jan 28 16:14:17 crc kubenswrapper[4981]: I0128 16:14:17.601924 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_c04c7269-f8ca-43e7-a204-d1ab4429f2f5/nova-api-api/0.log" Jan 28 16:14:17 crc kubenswrapper[4981]: I0128 16:14:17.663774 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-gx6fp_d6e35d22-36ba-4506-a8bf-f0a7f539502a/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:14:17 crc kubenswrapper[4981]: I0128 16:14:17.952118 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c/nova-metadata-log/0.log" Jan 28 16:14:18 crc kubenswrapper[4981]: I0128 16:14:18.566115 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd/mysql-bootstrap/0.log" Jan 28 16:14:18 crc kubenswrapper[4981]: I0128 16:14:18.784253 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd/mysql-bootstrap/0.log" Jan 28 16:14:18 crc kubenswrapper[4981]: I0128 16:14:18.848562 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_aef47cf6-3a65-4f6c-bcd4-68d658d4b1bd/galera/0.log" Jan 28 16:14:18 crc kubenswrapper[4981]: I0128 16:14:18.856367 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_8baced8e-6e29-4788-a841-d5c7d8a5e294/nova-scheduler-scheduler/0.log" Jan 28 16:14:19 crc kubenswrapper[4981]: I0128 16:14:19.051732 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ee506ff0-7634-45eb-ac9f-5d5de1b3c40a/mysql-bootstrap/0.log" Jan 28 16:14:19 crc kubenswrapper[4981]: I0128 16:14:19.295360 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ee506ff0-7634-45eb-ac9f-5d5de1b3c40a/mysql-bootstrap/0.log" Jan 28 16:14:19 crc kubenswrapper[4981]: I0128 16:14:19.332484 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_ee506ff0-7634-45eb-ac9f-5d5de1b3c40a/galera/0.log" Jan 28 16:14:19 crc kubenswrapper[4981]: I0128 16:14:19.408428 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_80f4d9f4-cc1c-4005-a9e5-f3251ff08c0c/nova-metadata-metadata/0.log" Jan 28 16:14:19 crc kubenswrapper[4981]: I0128 16:14:19.557609 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5b2d58d2-eda2-4c4c-9b5a-246b3440d2e6/openstackclient/0.log" Jan 28 16:14:19 crc kubenswrapper[4981]: I0128 16:14:19.580446 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-bnkpb_8109b11f-0a6a-4894-b7f7-c6d46a62570e/ovn-controller/0.log" Jan 28 16:14:19 crc kubenswrapper[4981]: I0128 16:14:19.747814 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-cbsgf_510c52de-24af-4fe2-833d-0990283aa110/openstack-network-exporter/0.log" Jan 28 16:14:19 crc kubenswrapper[4981]: I0128 16:14:19.780277 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c8dt7_124744f1-80e9-4fe2-8889-e13e0033ac84/ovsdb-server-init/0.log" Jan 28 16:14:20 crc kubenswrapper[4981]: I0128 16:14:20.055257 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c8dt7_124744f1-80e9-4fe2-8889-e13e0033ac84/ovs-vswitchd/0.log" Jan 28 16:14:20 crc kubenswrapper[4981]: I0128 16:14:20.058026 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c8dt7_124744f1-80e9-4fe2-8889-e13e0033ac84/ovsdb-server/0.log" Jan 28 16:14:20 crc kubenswrapper[4981]: I0128 16:14:20.124220 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-c8dt7_124744f1-80e9-4fe2-8889-e13e0033ac84/ovsdb-server-init/0.log" Jan 28 16:14:20 crc kubenswrapper[4981]: I0128 16:14:20.321590 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-mshfl_66c75472-5f94-47b6-bed5-94306835c5fa/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:14:20 crc kubenswrapper[4981]: I0128 16:14:20.389606 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_71b20415-1b79-4236-89db-42f2787cc2c2/openstack-network-exporter/0.log" Jan 28 16:14:20 crc kubenswrapper[4981]: I0128 16:14:20.408622 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_71b20415-1b79-4236-89db-42f2787cc2c2/ovn-northd/0.log" Jan 28 16:14:20 crc kubenswrapper[4981]: I0128 16:14:20.518725 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_0a6c5d9b-a13a-42e8-9d15-f705822bb088/openstack-network-exporter/0.log" Jan 28 16:14:20 crc kubenswrapper[4981]: I0128 16:14:20.593663 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-0_0a6c5d9b-a13a-42e8-9d15-f705822bb088/ovsdbserver-nb/0.log" Jan 28 16:14:20 crc kubenswrapper[4981]: I0128 16:14:20.688375 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0f9063a3-5cdd-4e55-a714-79db63f3b8b9/openstack-network-exporter/0.log" Jan 28 16:14:20 crc kubenswrapper[4981]: I0128 16:14:20.818893 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_0f9063a3-5cdd-4e55-a714-79db63f3b8b9/ovsdbserver-sb/0.log" Jan 28 16:14:20 crc kubenswrapper[4981]: I0128 16:14:20.965157 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77d87cc6cd-znvvw_7e60cad3-42b0-4a56-be02-e4433ea5585f/placement-api/0.log" Jan 28 16:14:21 crc kubenswrapper[4981]: I0128 16:14:21.026983 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-77d87cc6cd-znvvw_7e60cad3-42b0-4a56-be02-e4433ea5585f/placement-log/0.log" Jan 28 16:14:21 crc kubenswrapper[4981]: I0128 16:14:21.080520 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8c327a08-ce8c-42f7-b305-cfc8b7f2d644/setup-container/0.log" Jan 28 16:14:21 crc kubenswrapper[4981]: I0128 16:14:21.266708 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8c327a08-ce8c-42f7-b305-cfc8b7f2d644/rabbitmq/0.log" Jan 28 16:14:21 crc kubenswrapper[4981]: I0128 16:14:21.272549 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_8c327a08-ce8c-42f7-b305-cfc8b7f2d644/setup-container/0.log" Jan 28 16:14:21 crc kubenswrapper[4981]: I0128 16:14:21.346514 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ead14a8f-5759-4a7c-b8a4-6560131c28d1/setup-container/0.log" Jan 28 16:14:21 crc kubenswrapper[4981]: I0128 16:14:21.481440 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ead14a8f-5759-4a7c-b8a4-6560131c28d1/setup-container/0.log" Jan 28 16:14:21 crc kubenswrapper[4981]: I0128 16:14:21.561562 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_ead14a8f-5759-4a7c-b8a4-6560131c28d1/rabbitmq/0.log" Jan 28 16:14:21 crc kubenswrapper[4981]: I0128 16:14:21.632484 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-qptj6_5e256fd3-d946-40f7-a93d-906351bf73f8/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:14:21 crc kubenswrapper[4981]: I0128 16:14:21.793598 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-pp86c_7f96c624-5794-4657-b6b9-00cccf2ac699/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:14:21 crc kubenswrapper[4981]: I0128 16:14:21.913866 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-kvvbd_4db01d71-54cf-49d3-a603-09ee1687a0d6/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:14:22 crc kubenswrapper[4981]: I0128 16:14:22.051862 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-jz8bs_605a9090-e629-463f-9119-7229674dccc7/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:14:22 crc kubenswrapper[4981]: I0128 16:14:22.151754 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-gllft_2531e40d-1556-4368-b4db-be4d6364097a/ssh-known-hosts-edpm-deployment/0.log" Jan 28 16:14:22 crc kubenswrapper[4981]: I0128 16:14:22.316956 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-d79b67667-4jvlp_f3854c5d-2ac4-48d0-96df-a96b2fa5feb7/proxy-server/0.log" Jan 28 16:14:22 crc kubenswrapper[4981]: I0128 16:14:22.505630 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-d79b67667-4jvlp_f3854c5d-2ac4-48d0-96df-a96b2fa5feb7/proxy-httpd/0.log" Jan 28 16:14:22 crc kubenswrapper[4981]: I0128 16:14:22.565438 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-zs4xh_a59295bc-49fa-4b41-b2a1-3c19c27292e5/swift-ring-rebalance/0.log" Jan 28 16:14:22 crc kubenswrapper[4981]: I0128 16:14:22.620267 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/account-auditor/0.log" Jan 28 16:14:22 crc kubenswrapper[4981]: I0128 16:14:22.758041 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/account-reaper/0.log" Jan 28 16:14:22 crc kubenswrapper[4981]: I0128 16:14:22.800772 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/container-auditor/0.log" Jan 28 16:14:22 crc kubenswrapper[4981]: I0128 16:14:22.823908 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/account-server/0.log" Jan 28 16:14:22 crc kubenswrapper[4981]: I0128 16:14:22.824541 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/account-replicator/0.log" Jan 28 16:14:22 crc kubenswrapper[4981]: I0128 16:14:22.960637 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/container-replicator/0.log" Jan 28 16:14:23 crc kubenswrapper[4981]: I0128 16:14:23.007359 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/container-server/0.log" Jan 28 16:14:23 crc kubenswrapper[4981]: I0128 16:14:23.084158 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/container-updater/0.log" Jan 28 16:14:23 crc kubenswrapper[4981]: I0128 16:14:23.102021 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/object-auditor/0.log" Jan 28 16:14:23 crc kubenswrapper[4981]: I0128 16:14:23.159014 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/object-expirer/0.log" Jan 28 16:14:23 crc kubenswrapper[4981]: I0128 16:14:23.276815 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/object-replicator/0.log" Jan 28 16:14:23 crc kubenswrapper[4981]: I0128 16:14:23.293494 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/object-server/0.log" Jan 28 16:14:23 crc kubenswrapper[4981]: I0128 16:14:23.319711 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/object-updater/0.log" Jan 28 16:14:23 crc kubenswrapper[4981]: I0128 16:14:23.357963 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/rsync/0.log" Jan 28 16:14:23 crc kubenswrapper[4981]: I0128 16:14:23.496110 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_a3c5f4dc-185e-4293-9853-f16cde7997fa/swift-recon-cron/0.log" Jan 28 16:14:23 crc kubenswrapper[4981]: I0128 16:14:23.651472 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-rj9tj_b2a912b2-d6a7-4cd5-8cba-66b942182410/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:14:23 crc kubenswrapper[4981]: I0128 16:14:23.780540 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_c34b143a-0284-461d-a788-106a5f6dca6c/tempest-tests-tempest-tests-runner/0.log" Jan 28 16:14:23 crc kubenswrapper[4981]: I0128 16:14:23.871753 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_b9d341b2-c188-4cb6-a39a-0313e67fac6e/test-operator-logs-container/0.log" Jan 28 16:14:24 crc kubenswrapper[4981]: I0128 16:14:24.261691 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-ckpts_f654c5ca-b187-484f-b9bd-c487bda39586/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 16:14:36 crc kubenswrapper[4981]: I0128 16:14:36.257358 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_3bab2457-dbba-4fa0-b0c7-0b05a9546bc6/memcached/0.log" Jan 28 16:14:49 crc kubenswrapper[4981]: I0128 16:14:49.898044 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:14:49 crc kubenswrapper[4981]: I0128 16:14:49.898556 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:14:52 crc kubenswrapper[4981]: I0128 16:14:52.866863 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd_1c96985c-93d1-4967-83e7-0794b3159ca9/util/0.log" Jan 28 16:14:53 crc kubenswrapper[4981]: I0128 16:14:53.275287 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd_1c96985c-93d1-4967-83e7-0794b3159ca9/pull/0.log" Jan 28 16:14:53 crc kubenswrapper[4981]: I0128 16:14:53.310295 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd_1c96985c-93d1-4967-83e7-0794b3159ca9/util/0.log" Jan 28 16:14:53 crc kubenswrapper[4981]: I0128 16:14:53.317922 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd_1c96985c-93d1-4967-83e7-0794b3159ca9/pull/0.log" Jan 28 16:14:53 crc kubenswrapper[4981]: I0128 16:14:53.456696 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd_1c96985c-93d1-4967-83e7-0794b3159ca9/util/0.log" Jan 28 16:14:53 crc kubenswrapper[4981]: I0128 16:14:53.469141 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd_1c96985c-93d1-4967-83e7-0794b3159ca9/extract/0.log" Jan 28 16:14:53 crc kubenswrapper[4981]: I0128 16:14:53.502513 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b38dc63d3b17e0e5a0a9ff0e8b53e07302a988f1e455c1225be35abef8gt5gd_1c96985c-93d1-4967-83e7-0794b3159ca9/pull/0.log" Jan 28 16:14:53 crc kubenswrapper[4981]: I0128 16:14:53.772826 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-gmjsg_5d37dcb3-e31a-40e3-ba16-803490369e86/manager/0.log" Jan 28 16:14:53 crc kubenswrapper[4981]: I0128 16:14:53.799159 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-ddkfp_e2393e50-201d-45e8-96c8-f2bfba6fed7c/manager/0.log" Jan 28 16:14:53 crc kubenswrapper[4981]: I0128 16:14:53.872532 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-gmd8n_0a5805bf-96b2-4893-8811-603eacec1cba/manager/0.log" Jan 28 16:14:54 crc kubenswrapper[4981]: I0128 16:14:54.045253 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-c42cn_94e20f49-bc4f-4bbb-9c67-7f3dc5b925b5/manager/0.log" Jan 28 16:14:54 crc kubenswrapper[4981]: I0128 16:14:54.093752 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-mqpdl_9ea05521-0dfa-4175-b394-1b5e55fc4c7f/manager/0.log" Jan 28 16:14:54 crc kubenswrapper[4981]: I0128 16:14:54.254917 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-4nfrz_26db2e76-9e22-4b02-8c7f-6ae79127ae41/manager/0.log" Jan 28 16:14:54 crc kubenswrapper[4981]: I0128 16:14:54.480515 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-wq22r_e2b38a13-7a0c-4836-9abc-be0e65837eb9/manager/0.log" Jan 28 16:14:54 crc kubenswrapper[4981]: I0128 16:14:54.566429 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-mk8hk_28d521cc-409b-485a-809b-98e3e552c042/manager/0.log" Jan 28 16:14:54 crc kubenswrapper[4981]: I0128 16:14:54.666436 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-6gnfx_6c02a009-565d-4217-9d71-ca0505f90cb0/manager/0.log" Jan 28 16:14:54 crc kubenswrapper[4981]: I0128 16:14:54.789756 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-pqffr_47340953-e89f-4a20-bbd6-0e25c39b810a/manager/0.log" Jan 28 16:14:54 crc kubenswrapper[4981]: I0128 16:14:54.907009 4981 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-w8d2t_9f95744b-30e0-4d4f-9911-12ca57813aff/manager/0.log" Jan 28 16:14:55 crc kubenswrapper[4981]: I0128 16:14:55.029725 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-wjvvk_05d30f3a-fdc7-4b65-a93b-747718217906/manager/0.log" Jan 28 16:14:55 crc kubenswrapper[4981]: I0128 16:14:55.222865 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-kd8bc_462b383e-f994-4f35-a29c-6be57d7fd20c/manager/0.log" Jan 28 16:14:55 crc kubenswrapper[4981]: I0128 16:14:55.255630 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-7tgrh_305a2f40-90a1-4e46-83a6-0ae818e35157/manager/0.log" Jan 28 16:14:55 crc kubenswrapper[4981]: I0128 16:14:55.401055 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854qllt9_655712aa-6ff8-4f99-ac13-85a3def79e97/manager/0.log" Jan 28 16:14:55 crc kubenswrapper[4981]: I0128 16:14:55.536508 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-86c9dcbc4-gnfc9_16d249a4-617e-4f09-9fca-93b89b337167/operator/0.log" Jan 28 16:14:55 crc kubenswrapper[4981]: I0128 16:14:55.755224 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cpdk7_1d4745c7-b014-492d-936f-d4c430359df3/registry-server/0.log" Jan 28 16:14:56 crc kubenswrapper[4981]: I0128 16:14:56.318038 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-v4hcv_21991fd6-b7f4-48cc-b372-5e43be416857/manager/0.log" Jan 28 16:14:56 crc kubenswrapper[4981]: I0128 16:14:56.436713 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-8rf4r_34ddd48f-2151-4df8-af17-70b926965a9e/manager/0.log" Jan 28 16:14:56 crc kubenswrapper[4981]: I0128 16:14:56.685590 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-xjp7n_ad2c98b1-4994-4602-af9f-6dce33122651/operator/0.log" Jan 28 16:14:56 crc kubenswrapper[4981]: I0128 16:14:56.764510 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-fcdbf6b45-9f88t_b1fc4c9e-98f7-4f04-93c7-7dfa60d15e74/manager/0.log" Jan 28 16:14:56 crc kubenswrapper[4981]: I0128 16:14:56.858927 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-6qxz5_7338b601-fe21-458b-97b8-99977fcdb582/manager/0.log" Jan 28 16:14:56 crc kubenswrapper[4981]: I0128 16:14:56.923692 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-f8ckc_18a8ea11-fca0-4503-a458-90ae9e542401/manager/0.log" Jan 28 16:14:57 crc kubenswrapper[4981]: I0128 16:14:57.051372 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-pj2hb_b4355527-cc7c-436f-a9b0-69f4860f0e36/manager/0.log" Jan 28 16:14:57 crc kubenswrapper[4981]: I0128 16:14:57.120753 4981 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-65v5g_82289c62-674e-483e-ac47-f09b000a0c90/manager/0.log" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.181225 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq"] Jan 28 16:15:00 crc kubenswrapper[4981]: E0128 16:15:00.182220 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5c54246-f24f-4867-a2d4-dfcd7bb3f082" containerName="extract-utilities" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.182240 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5c54246-f24f-4867-a2d4-dfcd7bb3f082" containerName="extract-utilities" Jan 28 16:15:00 crc kubenswrapper[4981]: E0128 16:15:00.182252 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5c54246-f24f-4867-a2d4-dfcd7bb3f082" containerName="registry-server" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.182258 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5c54246-f24f-4867-a2d4-dfcd7bb3f082" containerName="registry-server" Jan 28 16:15:00 crc kubenswrapper[4981]: E0128 16:15:00.182280 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5c54246-f24f-4867-a2d4-dfcd7bb3f082" containerName="extract-content" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.182286 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5c54246-f24f-4867-a2d4-dfcd7bb3f082" containerName="extract-content" Jan 28 16:15:00 crc kubenswrapper[4981]: E0128 16:15:00.182294 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffdf9313-b2e1-46b2-a77e-f4e046638535" containerName="container-00" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.182300 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffdf9313-b2e1-46b2-a77e-f4e046638535" containerName="container-00" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.182494 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5c54246-f24f-4867-a2d4-dfcd7bb3f082" containerName="registry-server" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.182523 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffdf9313-b2e1-46b2-a77e-f4e046638535" containerName="container-00" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.188580 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.192697 4981 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.192920 4981 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.201319 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq"] Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.291947 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f596339a-bf7a-4578-9845-6f876f70ee95-config-volume\") pod \"collect-profiles-29493615-h5mqq\" (UID: \"f596339a-bf7a-4578-9845-6f876f70ee95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.291994 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f596339a-bf7a-4578-9845-6f876f70ee95-secret-volume\") pod \"collect-profiles-29493615-h5mqq\" (UID: \"f596339a-bf7a-4578-9845-6f876f70ee95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.292044 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr4f2\" (UniqueName: \"kubernetes.io/projected/f596339a-bf7a-4578-9845-6f876f70ee95-kube-api-access-pr4f2\") pod \"collect-profiles-29493615-h5mqq\" (UID: \"f596339a-bf7a-4578-9845-6f876f70ee95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.393043 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr4f2\" (UniqueName: \"kubernetes.io/projected/f596339a-bf7a-4578-9845-6f876f70ee95-kube-api-access-pr4f2\") pod \"collect-profiles-29493615-h5mqq\" (UID: \"f596339a-bf7a-4578-9845-6f876f70ee95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.393271 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f596339a-bf7a-4578-9845-6f876f70ee95-config-volume\") pod \"collect-profiles-29493615-h5mqq\" (UID: \"f596339a-bf7a-4578-9845-6f876f70ee95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.393296 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f596339a-bf7a-4578-9845-6f876f70ee95-secret-volume\") pod \"collect-profiles-29493615-h5mqq\" (UID: \"f596339a-bf7a-4578-9845-6f876f70ee95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.394627 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f596339a-bf7a-4578-9845-6f876f70ee95-config-volume\") pod 
\"collect-profiles-29493615-h5mqq\" (UID: \"f596339a-bf7a-4578-9845-6f876f70ee95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.399796 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f596339a-bf7a-4578-9845-6f876f70ee95-secret-volume\") pod \"collect-profiles-29493615-h5mqq\" (UID: \"f596339a-bf7a-4578-9845-6f876f70ee95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.414605 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr4f2\" (UniqueName: \"kubernetes.io/projected/f596339a-bf7a-4578-9845-6f876f70ee95-kube-api-access-pr4f2\") pod \"collect-profiles-29493615-h5mqq\" (UID: \"f596339a-bf7a-4578-9845-6f876f70ee95\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.511739 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq" Jan 28 16:15:00 crc kubenswrapper[4981]: I0128 16:15:00.979671 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq"] Jan 28 16:15:01 crc kubenswrapper[4981]: I0128 16:15:01.506874 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq" event={"ID":"f596339a-bf7a-4578-9845-6f876f70ee95","Type":"ContainerStarted","Data":"8e333085717c7c124f06f2f2557af36f473a55f1a39fbb4937da32e98887f791"} Jan 28 16:15:01 crc kubenswrapper[4981]: I0128 16:15:01.507216 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq" event={"ID":"f596339a-bf7a-4578-9845-6f876f70ee95","Type":"ContainerStarted","Data":"9bcfb6ff93dc96839d0adef25d45a1effafa4b8e50c8dc66499a41070455cdb7"} Jan 28 16:15:01 crc kubenswrapper[4981]: I0128 16:15:01.532483 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq" podStartSLOduration=1.5324519109999999 podStartE2EDuration="1.532451911s" podCreationTimestamp="2026-01-28 16:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 16:15:01.525784965 +0000 UTC m=+4312.977943206" watchObservedRunningTime="2026-01-28 16:15:01.532451911 +0000 UTC m=+4312.984610152" Jan 28 16:15:02 crc kubenswrapper[4981]: I0128 16:15:02.516486 4981 generic.go:334] "Generic (PLEG): container finished" podID="f596339a-bf7a-4578-9845-6f876f70ee95" containerID="8e333085717c7c124f06f2f2557af36f473a55f1a39fbb4937da32e98887f791" exitCode=0 Jan 28 16:15:02 crc kubenswrapper[4981]: I0128 16:15:02.517416 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq" event={"ID":"f596339a-bf7a-4578-9845-6f876f70ee95","Type":"ContainerDied","Data":"8e333085717c7c124f06f2f2557af36f473a55f1a39fbb4937da32e98887f791"} Jan 28 16:15:03 crc kubenswrapper[4981]: I0128 16:15:03.927385 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq" Jan 28 16:15:03 crc kubenswrapper[4981]: I0128 16:15:03.992003 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr4f2\" (UniqueName: \"kubernetes.io/projected/f596339a-bf7a-4578-9845-6f876f70ee95-kube-api-access-pr4f2\") pod \"f596339a-bf7a-4578-9845-6f876f70ee95\" (UID: \"f596339a-bf7a-4578-9845-6f876f70ee95\") " Jan 28 16:15:03 crc kubenswrapper[4981]: I0128 16:15:03.992058 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f596339a-bf7a-4578-9845-6f876f70ee95-secret-volume\") pod \"f596339a-bf7a-4578-9845-6f876f70ee95\" (UID: \"f596339a-bf7a-4578-9845-6f876f70ee95\") " Jan 28 16:15:03 crc kubenswrapper[4981]: I0128 16:15:03.992115 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f596339a-bf7a-4578-9845-6f876f70ee95-config-volume\") pod \"f596339a-bf7a-4578-9845-6f876f70ee95\" (UID: \"f596339a-bf7a-4578-9845-6f876f70ee95\") " Jan 28 16:15:03 crc kubenswrapper[4981]: I0128 16:15:03.997419 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f596339a-bf7a-4578-9845-6f876f70ee95-config-volume" (OuterVolumeSpecName: "config-volume") pod "f596339a-bf7a-4578-9845-6f876f70ee95" (UID: "f596339a-bf7a-4578-9845-6f876f70ee95"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 16:15:04 crc kubenswrapper[4981]: I0128 16:15:04.004667 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f596339a-bf7a-4578-9845-6f876f70ee95-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f596339a-bf7a-4578-9845-6f876f70ee95" (UID: "f596339a-bf7a-4578-9845-6f876f70ee95"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 16:15:04 crc kubenswrapper[4981]: I0128 16:15:04.004802 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f596339a-bf7a-4578-9845-6f876f70ee95-kube-api-access-pr4f2" (OuterVolumeSpecName: "kube-api-access-pr4f2") pod "f596339a-bf7a-4578-9845-6f876f70ee95" (UID: "f596339a-bf7a-4578-9845-6f876f70ee95"). InnerVolumeSpecName "kube-api-access-pr4f2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:15:04 crc kubenswrapper[4981]: I0128 16:15:04.094929 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr4f2\" (UniqueName: \"kubernetes.io/projected/f596339a-bf7a-4578-9845-6f876f70ee95-kube-api-access-pr4f2\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:04 crc kubenswrapper[4981]: I0128 16:15:04.094970 4981 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f596339a-bf7a-4578-9845-6f876f70ee95-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:04 crc kubenswrapper[4981]: I0128 16:15:04.094981 4981 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f596339a-bf7a-4578-9845-6f876f70ee95-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:04 crc kubenswrapper[4981]: I0128 16:15:04.534473 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq" event={"ID":"f596339a-bf7a-4578-9845-6f876f70ee95","Type":"ContainerDied","Data":"9bcfb6ff93dc96839d0adef25d45a1effafa4b8e50c8dc66499a41070455cdb7"} Jan 28 16:15:04 crc kubenswrapper[4981]: I0128 16:15:04.534526 4981 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bcfb6ff93dc96839d0adef25d45a1effafa4b8e50c8dc66499a41070455cdb7" Jan 28 16:15:04 crc kubenswrapper[4981]: I0128 16:15:04.534524 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493615-h5mqq" Jan 28 16:15:04 crc kubenswrapper[4981]: I0128 16:15:04.610220 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6"] Jan 28 16:15:04 crc kubenswrapper[4981]: I0128 16:15:04.622116 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493570-lm6z6"] Jan 28 16:15:05 crc kubenswrapper[4981]: I0128 16:15:05.329165 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b" path="/var/lib/kubelet/pods/b8458a57-c2f2-44af-a8ff-c2c8cec9ff0b/volumes" Jan 28 16:15:18 crc kubenswrapper[4981]: I0128 16:15:18.175847 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-x2wjc_313fb5fa-63ee-4008-9e6c-94adc6fa6e67/control-plane-machine-set-operator/0.log" Jan 28 16:15:18 crc kubenswrapper[4981]: I0128 16:15:18.366757 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rb46f_7bc7864e-dc24-4885-b829-e9ee56d0bb2a/kube-rbac-proxy/0.log" Jan 28 16:15:18 crc kubenswrapper[4981]: I0128 16:15:18.392060 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-rb46f_7bc7864e-dc24-4885-b829-e9ee56d0bb2a/machine-api-operator/0.log" Jan 28 16:15:19 crc kubenswrapper[4981]: I0128 16:15:19.897528 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:15:19 crc kubenswrapper[4981]: I0128 16:15:19.897828 4981 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:15:31 crc kubenswrapper[4981]: I0128 16:15:31.509554 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-rds8t_275dc545-afa7-4d22-9c2e-bc41e21e187f/cert-manager-controller/0.log" Jan 28 16:15:31 crc kubenswrapper[4981]: I0128 16:15:31.623512 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-lzqwx_531e0ce3-f8d2-423f-8934-5427dca677c8/cert-manager-cainjector/0.log" Jan 28 16:15:31 crc kubenswrapper[4981]: I0128 16:15:31.727543 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-m9hjc_499b468f-8150-49ce-9ec6-964f94f1234d/cert-manager-webhook/0.log" Jan 28 16:15:36 crc kubenswrapper[4981]: I0128 16:15:36.041823 4981 scope.go:117] "RemoveContainer" containerID="33cf86988a93e27a839eca4398708a38c1afe30d8ab65c64dad17c7d1099bf8b" Jan 28 16:15:43 crc kubenswrapper[4981]: I0128 16:15:43.340181 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rrlqc"] Jan 28 16:15:43 crc kubenswrapper[4981]: E0128 16:15:43.341057 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f596339a-bf7a-4578-9845-6f876f70ee95" containerName="collect-profiles" Jan 28 16:15:43 crc kubenswrapper[4981]: I0128 16:15:43.341071 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="f596339a-bf7a-4578-9845-6f876f70ee95" containerName="collect-profiles" Jan 28 16:15:43 crc kubenswrapper[4981]: I0128 16:15:43.341297 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="f596339a-bf7a-4578-9845-6f876f70ee95" containerName="collect-profiles" Jan 28 16:15:43 crc kubenswrapper[4981]: I0128 16:15:43.342669 4981 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rrlqc" Jan 28 16:15:43 crc kubenswrapper[4981]: I0128 16:15:43.389828 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rrlqc"] Jan 28 16:15:43 crc kubenswrapper[4981]: I0128 16:15:43.452313 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjj2z\" (UniqueName: \"kubernetes.io/projected/48f1e504-3786-4fd3-8b58-ed5e9b830b5e-kube-api-access-fjj2z\") pod \"certified-operators-rrlqc\" (UID: \"48f1e504-3786-4fd3-8b58-ed5e9b830b5e\") " pod="openshift-marketplace/certified-operators-rrlqc" Jan 28 16:15:43 crc kubenswrapper[4981]: I0128 16:15:43.452825 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48f1e504-3786-4fd3-8b58-ed5e9b830b5e-catalog-content\") pod \"certified-operators-rrlqc\" (UID: \"48f1e504-3786-4fd3-8b58-ed5e9b830b5e\") " pod="openshift-marketplace/certified-operators-rrlqc" Jan 28 16:15:43 crc kubenswrapper[4981]: I0128 16:15:43.452913 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f1e504-3786-4fd3-8b58-ed5e9b830b5e-utilities\") pod \"certified-operators-rrlqc\" (UID: \"48f1e504-3786-4fd3-8b58-ed5e9b830b5e\") " pod="openshift-marketplace/certified-operators-rrlqc" Jan 28 16:15:43 crc kubenswrapper[4981]: I0128 16:15:43.554591 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjj2z\" (UniqueName: \"kubernetes.io/projected/48f1e504-3786-4fd3-8b58-ed5e9b830b5e-kube-api-access-fjj2z\") pod \"certified-operators-rrlqc\" (UID: \"48f1e504-3786-4fd3-8b58-ed5e9b830b5e\") " pod="openshift-marketplace/certified-operators-rrlqc" Jan 28 16:15:43 crc kubenswrapper[4981]: I0128 16:15:43.554660 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48f1e504-3786-4fd3-8b58-ed5e9b830b5e-catalog-content\") pod \"certified-operators-rrlqc\" (UID: \"48f1e504-3786-4fd3-8b58-ed5e9b830b5e\") " pod="openshift-marketplace/certified-operators-rrlqc" Jan 28 16:15:43 crc kubenswrapper[4981]: I0128 16:15:43.554684 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f1e504-3786-4fd3-8b58-ed5e9b830b5e-utilities\") pod \"certified-operators-rrlqc\" (UID: \"48f1e504-3786-4fd3-8b58-ed5e9b830b5e\") " pod="openshift-marketplace/certified-operators-rrlqc" Jan 28 16:15:43 crc kubenswrapper[4981]: I0128 16:15:43.555264 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f1e504-3786-4fd3-8b58-ed5e9b830b5e-utilities\") pod \"certified-operators-rrlqc\" (UID: \"48f1e504-3786-4fd3-8b58-ed5e9b830b5e\") " pod="openshift-marketplace/certified-operators-rrlqc" Jan 28 16:15:43 crc kubenswrapper[4981]: I0128 16:15:43.555733 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48f1e504-3786-4fd3-8b58-ed5e9b830b5e-catalog-content\") pod \"certified-operators-rrlqc\" (UID: \"48f1e504-3786-4fd3-8b58-ed5e9b830b5e\") " pod="openshift-marketplace/certified-operators-rrlqc" Jan 28 16:15:43 crc kubenswrapper[4981]: I0128 16:15:43.577052 4981 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fjj2z\" (UniqueName: \"kubernetes.io/projected/48f1e504-3786-4fd3-8b58-ed5e9b830b5e-kube-api-access-fjj2z\") pod \"certified-operators-rrlqc\" (UID: \"48f1e504-3786-4fd3-8b58-ed5e9b830b5e\") " pod="openshift-marketplace/certified-operators-rrlqc" Jan 28 16:15:43 crc kubenswrapper[4981]: I0128 16:15:43.678558 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrlqc" Jan 28 16:15:44 crc kubenswrapper[4981]: I0128 16:15:44.257991 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rrlqc"] Jan 28 16:15:44 crc kubenswrapper[4981]: I0128 16:15:44.888556 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-mnsnb_e4078e50-5cc6-45b4-8a9a-3a37c51537fa/nmstate-console-plugin/0.log" Jan 28 16:15:44 crc kubenswrapper[4981]: I0128 16:15:44.914946 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-kggn8_ee7ee978-971b-4e70-ac41-8a6c8f10b226/nmstate-handler/0.log" Jan 28 16:15:44 crc kubenswrapper[4981]: I0128 16:15:44.922670 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrlqc" event={"ID":"48f1e504-3786-4fd3-8b58-ed5e9b830b5e","Type":"ContainerStarted","Data":"e7297545f06bc1ca85e0e6eea18dd74213cc0306973c864aa5790f6a264356b0"} Jan 28 16:15:44 crc kubenswrapper[4981]: I0128 16:15:44.922722 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrlqc" event={"ID":"48f1e504-3786-4fd3-8b58-ed5e9b830b5e","Type":"ContainerStarted","Data":"5f4fabf2a621b57c8041929d748ccea50b659baa72667f48e3ccb28a0b89a559"} Jan 28 16:15:45 crc kubenswrapper[4981]: I0128 16:15:45.080957 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-lps8k_802eac95-d452-45f0-b0a2-765f410e4a6c/kube-rbac-proxy/0.log" Jan 28 16:15:45 crc kubenswrapper[4981]: I0128 16:15:45.106683 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-lps8k_802eac95-d452-45f0-b0a2-765f410e4a6c/nmstate-metrics/0.log" Jan 28 16:15:45 crc kubenswrapper[4981]: I0128 16:15:45.286082 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-m7qf9_44612930-8e0e-4893-9f15-58b828449dbb/nmstate-operator/0.log" Jan 28 16:15:45 crc kubenswrapper[4981]: I0128 16:15:45.352458 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-knc2t_efc29e7c-2d98-40bb-8335-5c763f217be4/nmstate-webhook/0.log" Jan 28 16:15:45 crc kubenswrapper[4981]: I0128 16:15:45.933247 4981 generic.go:334] "Generic (PLEG): container finished" podID="48f1e504-3786-4fd3-8b58-ed5e9b830b5e" containerID="e7297545f06bc1ca85e0e6eea18dd74213cc0306973c864aa5790f6a264356b0" exitCode=0 Jan 28 16:15:45 crc kubenswrapper[4981]: I0128 16:15:45.933328 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrlqc" event={"ID":"48f1e504-3786-4fd3-8b58-ed5e9b830b5e","Type":"ContainerDied","Data":"e7297545f06bc1ca85e0e6eea18dd74213cc0306973c864aa5790f6a264356b0"} Jan 28 16:15:46 crc kubenswrapper[4981]: I0128 16:15:46.537883 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hgdvh"] Jan 28 16:15:46 crc kubenswrapper[4981]: I0128 
16:15:46.540654 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hgdvh" Jan 28 16:15:46 crc kubenswrapper[4981]: I0128 16:15:46.564498 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hgdvh"] Jan 28 16:15:46 crc kubenswrapper[4981]: I0128 16:15:46.625347 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f773c23-5ad3-4f69-b968-66185e793bb7-catalog-content\") pod \"redhat-marketplace-hgdvh\" (UID: \"5f773c23-5ad3-4f69-b968-66185e793bb7\") " pod="openshift-marketplace/redhat-marketplace-hgdvh" Jan 28 16:15:46 crc kubenswrapper[4981]: I0128 16:15:46.625471 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqkxs\" (UniqueName: \"kubernetes.io/projected/5f773c23-5ad3-4f69-b968-66185e793bb7-kube-api-access-lqkxs\") pod \"redhat-marketplace-hgdvh\" (UID: \"5f773c23-5ad3-4f69-b968-66185e793bb7\") " pod="openshift-marketplace/redhat-marketplace-hgdvh" Jan 28 16:15:46 crc kubenswrapper[4981]: I0128 16:15:46.625512 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f773c23-5ad3-4f69-b968-66185e793bb7-utilities\") pod \"redhat-marketplace-hgdvh\" (UID: \"5f773c23-5ad3-4f69-b968-66185e793bb7\") " pod="openshift-marketplace/redhat-marketplace-hgdvh" Jan 28 16:15:46 crc kubenswrapper[4981]: I0128 16:15:46.727374 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqkxs\" (UniqueName: \"kubernetes.io/projected/5f773c23-5ad3-4f69-b968-66185e793bb7-kube-api-access-lqkxs\") pod \"redhat-marketplace-hgdvh\" (UID: \"5f773c23-5ad3-4f69-b968-66185e793bb7\") " pod="openshift-marketplace/redhat-marketplace-hgdvh" Jan 28 16:15:46 crc kubenswrapper[4981]: I0128 16:15:46.727434 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f773c23-5ad3-4f69-b968-66185e793bb7-utilities\") pod \"redhat-marketplace-hgdvh\" (UID: \"5f773c23-5ad3-4f69-b968-66185e793bb7\") " pod="openshift-marketplace/redhat-marketplace-hgdvh" Jan 28 16:15:46 crc kubenswrapper[4981]: I0128 16:15:46.727528 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f773c23-5ad3-4f69-b968-66185e793bb7-catalog-content\") pod \"redhat-marketplace-hgdvh\" (UID: \"5f773c23-5ad3-4f69-b968-66185e793bb7\") " pod="openshift-marketplace/redhat-marketplace-hgdvh" Jan 28 16:15:46 crc kubenswrapper[4981]: I0128 16:15:46.727965 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f773c23-5ad3-4f69-b968-66185e793bb7-catalog-content\") pod \"redhat-marketplace-hgdvh\" (UID: \"5f773c23-5ad3-4f69-b968-66185e793bb7\") " pod="openshift-marketplace/redhat-marketplace-hgdvh" Jan 28 16:15:46 crc kubenswrapper[4981]: I0128 16:15:46.728062 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f773c23-5ad3-4f69-b968-66185e793bb7-utilities\") pod \"redhat-marketplace-hgdvh\" (UID: \"5f773c23-5ad3-4f69-b968-66185e793bb7\") " pod="openshift-marketplace/redhat-marketplace-hgdvh" Jan 28 16:15:46 crc kubenswrapper[4981]: I0128 
16:15:46.750022 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqkxs\" (UniqueName: \"kubernetes.io/projected/5f773c23-5ad3-4f69-b968-66185e793bb7-kube-api-access-lqkxs\") pod \"redhat-marketplace-hgdvh\" (UID: \"5f773c23-5ad3-4f69-b968-66185e793bb7\") " pod="openshift-marketplace/redhat-marketplace-hgdvh" Jan 28 16:15:46 crc kubenswrapper[4981]: I0128 16:15:46.875160 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hgdvh" Jan 28 16:15:46 crc kubenswrapper[4981]: I0128 16:15:46.949778 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrlqc" event={"ID":"48f1e504-3786-4fd3-8b58-ed5e9b830b5e","Type":"ContainerStarted","Data":"d01822932cd534761f3ef6f5d49a71686cecce45d799b5f6320435540086ed41"} Jan 28 16:15:47 crc kubenswrapper[4981]: I0128 16:15:47.962988 4981 generic.go:334] "Generic (PLEG): container finished" podID="48f1e504-3786-4fd3-8b58-ed5e9b830b5e" containerID="d01822932cd534761f3ef6f5d49a71686cecce45d799b5f6320435540086ed41" exitCode=0 Jan 28 16:15:47 crc kubenswrapper[4981]: I0128 16:15:47.963034 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrlqc" event={"ID":"48f1e504-3786-4fd3-8b58-ed5e9b830b5e","Type":"ContainerDied","Data":"d01822932cd534761f3ef6f5d49a71686cecce45d799b5f6320435540086ed41"} Jan 28 16:15:48 crc kubenswrapper[4981]: I0128 16:15:48.094448 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hgdvh"] Jan 28 16:15:48 crc kubenswrapper[4981]: W0128 16:15:48.094850 4981 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f773c23_5ad3_4f69_b968_66185e793bb7.slice/crio-44893b04d94ea067ca3cab28d625df45e024b38916698c8740f870fc011f756d WatchSource:0}: Error finding container 44893b04d94ea067ca3cab28d625df45e024b38916698c8740f870fc011f756d: Status 404 returned error can't find the container with id 44893b04d94ea067ca3cab28d625df45e024b38916698c8740f870fc011f756d Jan 28 16:15:48 crc kubenswrapper[4981]: I0128 16:15:48.974387 4981 generic.go:334] "Generic (PLEG): container finished" podID="5f773c23-5ad3-4f69-b968-66185e793bb7" containerID="a73de0fd5182a1a700f1aeac1f7bdd11dc8bee5e804dee84c2b74c5c021bfd29" exitCode=0 Jan 28 16:15:48 crc kubenswrapper[4981]: I0128 16:15:48.974586 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hgdvh" event={"ID":"5f773c23-5ad3-4f69-b968-66185e793bb7","Type":"ContainerDied","Data":"a73de0fd5182a1a700f1aeac1f7bdd11dc8bee5e804dee84c2b74c5c021bfd29"} Jan 28 16:15:48 crc kubenswrapper[4981]: I0128 16:15:48.974725 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hgdvh" event={"ID":"5f773c23-5ad3-4f69-b968-66185e793bb7","Type":"ContainerStarted","Data":"44893b04d94ea067ca3cab28d625df45e024b38916698c8740f870fc011f756d"} Jan 28 16:15:49 crc kubenswrapper[4981]: I0128 16:15:49.897725 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:15:49 crc kubenswrapper[4981]: I0128 16:15:49.897971 4981 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:15:49 crc kubenswrapper[4981]: I0128 16:15:49.898009 4981 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 16:15:49 crc kubenswrapper[4981]: I0128 16:15:49.898701 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4ab0a2849dc2e00c4edd50099e12dfbd084b3aa7c1423fcb7eb0555a7c8c82d4"} pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:15:49 crc kubenswrapper[4981]: I0128 16:15:49.898756 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" containerID="cri-o://4ab0a2849dc2e00c4edd50099e12dfbd084b3aa7c1423fcb7eb0555a7c8c82d4" gracePeriod=600 Jan 28 16:15:49 crc kubenswrapper[4981]: I0128 16:15:49.990717 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrlqc" event={"ID":"48f1e504-3786-4fd3-8b58-ed5e9b830b5e","Type":"ContainerStarted","Data":"3e9ec4d077352d6edae8ae17b2f9608df2e23c6406b367813377a051d4c062f3"} Jan 28 16:15:50 crc kubenswrapper[4981]: I0128 16:15:50.020548 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rrlqc" podStartSLOduration=4.057463615 podStartE2EDuration="7.020525088s" podCreationTimestamp="2026-01-28 16:15:43 +0000 UTC" firstStartedPulling="2026-01-28 16:15:45.935860393 +0000 UTC m=+4357.388018634" lastFinishedPulling="2026-01-28 16:15:48.898921856 +0000 UTC m=+4360.351080107" observedRunningTime="2026-01-28 16:15:50.015747142 +0000 UTC m=+4361.467905383" watchObservedRunningTime="2026-01-28 16:15:50.020525088 +0000 UTC m=+4361.472683329" Jan 28 16:15:51 crc kubenswrapper[4981]: I0128 16:15:51.017372 4981 generic.go:334] "Generic (PLEG): container finished" podID="67525d77-715e-4ec3-bdbb-6854657355c0" containerID="4ab0a2849dc2e00c4edd50099e12dfbd084b3aa7c1423fcb7eb0555a7c8c82d4" exitCode=0 Jan 28 16:15:51 crc kubenswrapper[4981]: I0128 16:15:51.017938 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerDied","Data":"4ab0a2849dc2e00c4edd50099e12dfbd084b3aa7c1423fcb7eb0555a7c8c82d4"} Jan 28 16:15:51 crc kubenswrapper[4981]: I0128 16:15:51.019158 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerStarted","Data":"f760dbb0a2aa53beaa9da2cbd8b1ed868e5b38df09765a2c7d270efdc3c9fe7e"} Jan 28 16:15:51 crc kubenswrapper[4981]: I0128 16:15:51.019277 4981 scope.go:117] "RemoveContainer" containerID="849bd4da9a41206a9192599576052002079d29ec771478191931ac8c4688c539" Jan 28 16:15:51 crc kubenswrapper[4981]: I0128 16:15:51.042491 4981 generic.go:334] "Generic (PLEG): container finished" podID="5f773c23-5ad3-4f69-b968-66185e793bb7" 
containerID="c516fdf633292db63e20ea69aae403b79744557e0388d1b2d14343b8e2ce2630" exitCode=0 Jan 28 16:15:51 crc kubenswrapper[4981]: I0128 16:15:51.044216 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hgdvh" event={"ID":"5f773c23-5ad3-4f69-b968-66185e793bb7","Type":"ContainerDied","Data":"c516fdf633292db63e20ea69aae403b79744557e0388d1b2d14343b8e2ce2630"} Jan 28 16:15:53 crc kubenswrapper[4981]: I0128 16:15:53.069806 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hgdvh" event={"ID":"5f773c23-5ad3-4f69-b968-66185e793bb7","Type":"ContainerStarted","Data":"c64e4eec96a7eadf10fe72bc7d9a494b8077ddcfcb75837a17919622a4c8c676"} Jan 28 16:15:53 crc kubenswrapper[4981]: I0128 16:15:53.095529 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hgdvh" podStartSLOduration=4.219600446 podStartE2EDuration="7.095511611s" podCreationTimestamp="2026-01-28 16:15:46 +0000 UTC" firstStartedPulling="2026-01-28 16:15:48.978059352 +0000 UTC m=+4360.430217603" lastFinishedPulling="2026-01-28 16:15:51.853970527 +0000 UTC m=+4363.306128768" observedRunningTime="2026-01-28 16:15:53.088964189 +0000 UTC m=+4364.541122440" watchObservedRunningTime="2026-01-28 16:15:53.095511611 +0000 UTC m=+4364.547669852" Jan 28 16:15:53 crc kubenswrapper[4981]: I0128 16:15:53.678987 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rrlqc" Jan 28 16:15:53 crc kubenswrapper[4981]: I0128 16:15:53.679167 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rrlqc" Jan 28 16:15:54 crc kubenswrapper[4981]: I0128 16:15:54.297702 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rrlqc" Jan 28 16:15:55 crc kubenswrapper[4981]: I0128 16:15:55.140374 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rrlqc" Jan 28 16:15:56 crc kubenswrapper[4981]: I0128 16:15:56.127983 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rrlqc"] Jan 28 16:15:56 crc kubenswrapper[4981]: I0128 16:15:56.876022 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hgdvh" Jan 28 16:15:56 crc kubenswrapper[4981]: I0128 16:15:56.876505 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hgdvh" Jan 28 16:15:56 crc kubenswrapper[4981]: I0128 16:15:56.924795 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hgdvh" Jan 28 16:15:57 crc kubenswrapper[4981]: I0128 16:15:57.103330 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rrlqc" podUID="48f1e504-3786-4fd3-8b58-ed5e9b830b5e" containerName="registry-server" containerID="cri-o://3e9ec4d077352d6edae8ae17b2f9608df2e23c6406b367813377a051d4c062f3" gracePeriod=2 Jan 28 16:15:57 crc kubenswrapper[4981]: I0128 16:15:57.173818 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hgdvh" Jan 28 16:15:57 crc kubenswrapper[4981]: I0128 16:15:57.586967 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rrlqc" Jan 28 16:15:57 crc kubenswrapper[4981]: I0128 16:15:57.752088 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f1e504-3786-4fd3-8b58-ed5e9b830b5e-utilities\") pod \"48f1e504-3786-4fd3-8b58-ed5e9b830b5e\" (UID: \"48f1e504-3786-4fd3-8b58-ed5e9b830b5e\") " Jan 28 16:15:57 crc kubenswrapper[4981]: I0128 16:15:57.752237 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48f1e504-3786-4fd3-8b58-ed5e9b830b5e-catalog-content\") pod \"48f1e504-3786-4fd3-8b58-ed5e9b830b5e\" (UID: \"48f1e504-3786-4fd3-8b58-ed5e9b830b5e\") " Jan 28 16:15:57 crc kubenswrapper[4981]: I0128 16:15:57.752468 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjj2z\" (UniqueName: \"kubernetes.io/projected/48f1e504-3786-4fd3-8b58-ed5e9b830b5e-kube-api-access-fjj2z\") pod \"48f1e504-3786-4fd3-8b58-ed5e9b830b5e\" (UID: \"48f1e504-3786-4fd3-8b58-ed5e9b830b5e\") " Jan 28 16:15:57 crc kubenswrapper[4981]: I0128 16:15:57.753056 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48f1e504-3786-4fd3-8b58-ed5e9b830b5e-utilities" (OuterVolumeSpecName: "utilities") pod "48f1e504-3786-4fd3-8b58-ed5e9b830b5e" (UID: "48f1e504-3786-4fd3-8b58-ed5e9b830b5e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:15:57 crc kubenswrapper[4981]: I0128 16:15:57.761393 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48f1e504-3786-4fd3-8b58-ed5e9b830b5e-kube-api-access-fjj2z" (OuterVolumeSpecName: "kube-api-access-fjj2z") pod "48f1e504-3786-4fd3-8b58-ed5e9b830b5e" (UID: "48f1e504-3786-4fd3-8b58-ed5e9b830b5e"). InnerVolumeSpecName "kube-api-access-fjj2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:15:57 crc kubenswrapper[4981]: I0128 16:15:57.856959 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48f1e504-3786-4fd3-8b58-ed5e9b830b5e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:57 crc kubenswrapper[4981]: I0128 16:15:57.857011 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjj2z\" (UniqueName: \"kubernetes.io/projected/48f1e504-3786-4fd3-8b58-ed5e9b830b5e-kube-api-access-fjj2z\") on node \"crc\" DevicePath \"\"" Jan 28 16:15:58 crc kubenswrapper[4981]: I0128 16:15:58.116504 4981 generic.go:334] "Generic (PLEG): container finished" podID="48f1e504-3786-4fd3-8b58-ed5e9b830b5e" containerID="3e9ec4d077352d6edae8ae17b2f9608df2e23c6406b367813377a051d4c062f3" exitCode=0 Jan 28 16:15:58 crc kubenswrapper[4981]: I0128 16:15:58.116561 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrlqc" event={"ID":"48f1e504-3786-4fd3-8b58-ed5e9b830b5e","Type":"ContainerDied","Data":"3e9ec4d077352d6edae8ae17b2f9608df2e23c6406b367813377a051d4c062f3"} Jan 28 16:15:58 crc kubenswrapper[4981]: I0128 16:15:58.116607 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrlqc" event={"ID":"48f1e504-3786-4fd3-8b58-ed5e9b830b5e","Type":"ContainerDied","Data":"5f4fabf2a621b57c8041929d748ccea50b659baa72667f48e3ccb28a0b89a559"} Jan 28 16:15:58 crc kubenswrapper[4981]: I0128 16:15:58.116632 4981 scope.go:117] "RemoveContainer" containerID="3e9ec4d077352d6edae8ae17b2f9608df2e23c6406b367813377a051d4c062f3" Jan 28 16:15:58 crc kubenswrapper[4981]: I0128 16:15:58.116637 4981 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rrlqc" Jan 28 16:15:58 crc kubenswrapper[4981]: I0128 16:15:58.148378 4981 scope.go:117] "RemoveContainer" containerID="d01822932cd534761f3ef6f5d49a71686cecce45d799b5f6320435540086ed41" Jan 28 16:15:58 crc kubenswrapper[4981]: I0128 16:15:58.168304 4981 scope.go:117] "RemoveContainer" containerID="e7297545f06bc1ca85e0e6eea18dd74213cc0306973c864aa5790f6a264356b0" Jan 28 16:15:58 crc kubenswrapper[4981]: I0128 16:15:58.232677 4981 scope.go:117] "RemoveContainer" containerID="3e9ec4d077352d6edae8ae17b2f9608df2e23c6406b367813377a051d4c062f3" Jan 28 16:15:58 crc kubenswrapper[4981]: E0128 16:15:58.233709 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e9ec4d077352d6edae8ae17b2f9608df2e23c6406b367813377a051d4c062f3\": container with ID starting with 3e9ec4d077352d6edae8ae17b2f9608df2e23c6406b367813377a051d4c062f3 not found: ID does not exist" containerID="3e9ec4d077352d6edae8ae17b2f9608df2e23c6406b367813377a051d4c062f3" Jan 28 16:15:58 crc kubenswrapper[4981]: I0128 16:15:58.233746 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e9ec4d077352d6edae8ae17b2f9608df2e23c6406b367813377a051d4c062f3"} err="failed to get container status \"3e9ec4d077352d6edae8ae17b2f9608df2e23c6406b367813377a051d4c062f3\": rpc error: code = NotFound desc = could not find container \"3e9ec4d077352d6edae8ae17b2f9608df2e23c6406b367813377a051d4c062f3\": container with ID starting with 3e9ec4d077352d6edae8ae17b2f9608df2e23c6406b367813377a051d4c062f3 not found: ID does not exist" Jan 28 16:15:58 crc kubenswrapper[4981]: I0128 16:15:58.233807 4981 scope.go:117] "RemoveContainer" containerID="d01822932cd534761f3ef6f5d49a71686cecce45d799b5f6320435540086ed41" Jan 28 16:15:58 crc kubenswrapper[4981]: E0128 16:15:58.235669 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d01822932cd534761f3ef6f5d49a71686cecce45d799b5f6320435540086ed41\": container with ID starting with d01822932cd534761f3ef6f5d49a71686cecce45d799b5f6320435540086ed41 not found: ID does not exist" containerID="d01822932cd534761f3ef6f5d49a71686cecce45d799b5f6320435540086ed41" Jan 28 16:15:58 crc kubenswrapper[4981]: I0128 16:15:58.235717 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d01822932cd534761f3ef6f5d49a71686cecce45d799b5f6320435540086ed41"} err="failed to get container status \"d01822932cd534761f3ef6f5d49a71686cecce45d799b5f6320435540086ed41\": rpc error: code = NotFound desc = could not find container \"d01822932cd534761f3ef6f5d49a71686cecce45d799b5f6320435540086ed41\": container with ID starting with d01822932cd534761f3ef6f5d49a71686cecce45d799b5f6320435540086ed41 not found: ID does not exist" Jan 28 16:15:58 crc kubenswrapper[4981]: I0128 16:15:58.235746 4981 scope.go:117] "RemoveContainer" containerID="e7297545f06bc1ca85e0e6eea18dd74213cc0306973c864aa5790f6a264356b0" Jan 28 16:15:58 crc kubenswrapper[4981]: E0128 16:15:58.236224 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7297545f06bc1ca85e0e6eea18dd74213cc0306973c864aa5790f6a264356b0\": container with ID starting with e7297545f06bc1ca85e0e6eea18dd74213cc0306973c864aa5790f6a264356b0 not found: ID does not exist" containerID="e7297545f06bc1ca85e0e6eea18dd74213cc0306973c864aa5790f6a264356b0" 
Jan 28 16:15:58 crc kubenswrapper[4981]: I0128 16:15:58.236258 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7297545f06bc1ca85e0e6eea18dd74213cc0306973c864aa5790f6a264356b0"} err="failed to get container status \"e7297545f06bc1ca85e0e6eea18dd74213cc0306973c864aa5790f6a264356b0\": rpc error: code = NotFound desc = could not find container \"e7297545f06bc1ca85e0e6eea18dd74213cc0306973c864aa5790f6a264356b0\": container with ID starting with e7297545f06bc1ca85e0e6eea18dd74213cc0306973c864aa5790f6a264356b0 not found: ID does not exist"
Jan 28 16:15:58 crc kubenswrapper[4981]: I0128 16:15:58.485180 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48f1e504-3786-4fd3-8b58-ed5e9b830b5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48f1e504-3786-4fd3-8b58-ed5e9b830b5e" (UID: "48f1e504-3786-4fd3-8b58-ed5e9b830b5e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:15:58 crc kubenswrapper[4981]: I0128 16:15:58.575624 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48f1e504-3786-4fd3-8b58-ed5e9b830b5e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 16:15:58 crc kubenswrapper[4981]: I0128 16:15:58.751610 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rrlqc"]
Jan 28 16:15:58 crc kubenswrapper[4981]: I0128 16:15:58.772818 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rrlqc"]
Jan 28 16:15:59 crc kubenswrapper[4981]: I0128 16:15:59.331056 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48f1e504-3786-4fd3-8b58-ed5e9b830b5e" path="/var/lib/kubelet/pods/48f1e504-3786-4fd3-8b58-ed5e9b830b5e/volumes"
Jan 28 16:15:59 crc kubenswrapper[4981]: I0128 16:15:59.332469 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hgdvh"]
Jan 28 16:15:59 crc kubenswrapper[4981]: I0128 16:15:59.332821 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hgdvh" podUID="5f773c23-5ad3-4f69-b968-66185e793bb7" containerName="registry-server" containerID="cri-o://c64e4eec96a7eadf10fe72bc7d9a494b8077ddcfcb75837a17919622a4c8c676" gracePeriod=2
Jan 28 16:15:59 crc kubenswrapper[4981]: I0128 16:15:59.827173 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hgdvh"
Jan 28 16:15:59 crc kubenswrapper[4981]: I0128 16:15:59.901906 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f773c23-5ad3-4f69-b968-66185e793bb7-utilities\") pod \"5f773c23-5ad3-4f69-b968-66185e793bb7\" (UID: \"5f773c23-5ad3-4f69-b968-66185e793bb7\") "
Jan 28 16:15:59 crc kubenswrapper[4981]: I0128 16:15:59.902064 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqkxs\" (UniqueName: \"kubernetes.io/projected/5f773c23-5ad3-4f69-b968-66185e793bb7-kube-api-access-lqkxs\") pod \"5f773c23-5ad3-4f69-b968-66185e793bb7\" (UID: \"5f773c23-5ad3-4f69-b968-66185e793bb7\") "
Jan 28 16:15:59 crc kubenswrapper[4981]: I0128 16:15:59.902239 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f773c23-5ad3-4f69-b968-66185e793bb7-catalog-content\") pod \"5f773c23-5ad3-4f69-b968-66185e793bb7\" (UID: \"5f773c23-5ad3-4f69-b968-66185e793bb7\") "
Jan 28 16:15:59 crc kubenswrapper[4981]: I0128 16:15:59.903096 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f773c23-5ad3-4f69-b968-66185e793bb7-utilities" (OuterVolumeSpecName: "utilities") pod "5f773c23-5ad3-4f69-b968-66185e793bb7" (UID: "5f773c23-5ad3-4f69-b968-66185e793bb7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:15:59 crc kubenswrapper[4981]: I0128 16:15:59.920897 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f773c23-5ad3-4f69-b968-66185e793bb7-kube-api-access-lqkxs" (OuterVolumeSpecName: "kube-api-access-lqkxs") pod "5f773c23-5ad3-4f69-b968-66185e793bb7" (UID: "5f773c23-5ad3-4f69-b968-66185e793bb7"). InnerVolumeSpecName "kube-api-access-lqkxs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 16:15:59 crc kubenswrapper[4981]: I0128 16:15:59.927779 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f773c23-5ad3-4f69-b968-66185e793bb7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f773c23-5ad3-4f69-b968-66185e793bb7" (UID: "5f773c23-5ad3-4f69-b968-66185e793bb7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:16:00 crc kubenswrapper[4981]: I0128 16:16:00.004884 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f773c23-5ad3-4f69-b968-66185e793bb7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 16:16:00 crc kubenswrapper[4981]: I0128 16:16:00.004931 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f773c23-5ad3-4f69-b968-66185e793bb7-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 16:16:00 crc kubenswrapper[4981]: I0128 16:16:00.004945 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqkxs\" (UniqueName: \"kubernetes.io/projected/5f773c23-5ad3-4f69-b968-66185e793bb7-kube-api-access-lqkxs\") on node \"crc\" DevicePath \"\""
Jan 28 16:16:00 crc kubenswrapper[4981]: I0128 16:16:00.138250 4981 generic.go:334] "Generic (PLEG): container finished" podID="5f773c23-5ad3-4f69-b968-66185e793bb7" containerID="c64e4eec96a7eadf10fe72bc7d9a494b8077ddcfcb75837a17919622a4c8c676" exitCode=0
Jan 28 16:16:00 crc kubenswrapper[4981]: I0128 16:16:00.138308 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hgdvh" event={"ID":"5f773c23-5ad3-4f69-b968-66185e793bb7","Type":"ContainerDied","Data":"c64e4eec96a7eadf10fe72bc7d9a494b8077ddcfcb75837a17919622a4c8c676"}
Jan 28 16:16:00 crc kubenswrapper[4981]: I0128 16:16:00.138343 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hgdvh"
Jan 28 16:16:00 crc kubenswrapper[4981]: I0128 16:16:00.138371 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hgdvh" event={"ID":"5f773c23-5ad3-4f69-b968-66185e793bb7","Type":"ContainerDied","Data":"44893b04d94ea067ca3cab28d625df45e024b38916698c8740f870fc011f756d"}
Jan 28 16:16:00 crc kubenswrapper[4981]: I0128 16:16:00.138397 4981 scope.go:117] "RemoveContainer" containerID="c64e4eec96a7eadf10fe72bc7d9a494b8077ddcfcb75837a17919622a4c8c676"
Jan 28 16:16:00 crc kubenswrapper[4981]: I0128 16:16:00.161118 4981 scope.go:117] "RemoveContainer" containerID="c516fdf633292db63e20ea69aae403b79744557e0388d1b2d14343b8e2ce2630"
Jan 28 16:16:00 crc kubenswrapper[4981]: I0128 16:16:00.193161 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hgdvh"]
Jan 28 16:16:00 crc kubenswrapper[4981]: I0128 16:16:00.197360 4981 scope.go:117] "RemoveContainer" containerID="a73de0fd5182a1a700f1aeac1f7bdd11dc8bee5e804dee84c2b74c5c021bfd29"
Jan 28 16:16:00 crc kubenswrapper[4981]: I0128 16:16:00.202399 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hgdvh"]
Jan 28 16:16:00 crc kubenswrapper[4981]: I0128 16:16:00.261770 4981 scope.go:117] "RemoveContainer" containerID="c64e4eec96a7eadf10fe72bc7d9a494b8077ddcfcb75837a17919622a4c8c676"
Jan 28 16:16:00 crc kubenswrapper[4981]: E0128 16:16:00.262202 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c64e4eec96a7eadf10fe72bc7d9a494b8077ddcfcb75837a17919622a4c8c676\": container with ID starting with c64e4eec96a7eadf10fe72bc7d9a494b8077ddcfcb75837a17919622a4c8c676 not found: ID does not exist" containerID="c64e4eec96a7eadf10fe72bc7d9a494b8077ddcfcb75837a17919622a4c8c676"
Jan 28 16:16:00 crc kubenswrapper[4981]: I0128 16:16:00.262238 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c64e4eec96a7eadf10fe72bc7d9a494b8077ddcfcb75837a17919622a4c8c676"} err="failed to get container status \"c64e4eec96a7eadf10fe72bc7d9a494b8077ddcfcb75837a17919622a4c8c676\": rpc error: code = NotFound desc = could not find container \"c64e4eec96a7eadf10fe72bc7d9a494b8077ddcfcb75837a17919622a4c8c676\": container with ID starting with c64e4eec96a7eadf10fe72bc7d9a494b8077ddcfcb75837a17919622a4c8c676 not found: ID does not exist"
Jan 28 16:16:00 crc kubenswrapper[4981]: I0128 16:16:00.262263 4981 scope.go:117] "RemoveContainer" containerID="c516fdf633292db63e20ea69aae403b79744557e0388d1b2d14343b8e2ce2630"
Jan 28 16:16:00 crc kubenswrapper[4981]: E0128 16:16:00.262644 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c516fdf633292db63e20ea69aae403b79744557e0388d1b2d14343b8e2ce2630\": container with ID starting with c516fdf633292db63e20ea69aae403b79744557e0388d1b2d14343b8e2ce2630 not found: ID does not exist" containerID="c516fdf633292db63e20ea69aae403b79744557e0388d1b2d14343b8e2ce2630"
Jan 28 16:16:00 crc kubenswrapper[4981]: I0128 16:16:00.262672 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c516fdf633292db63e20ea69aae403b79744557e0388d1b2d14343b8e2ce2630"} err="failed to get container status \"c516fdf633292db63e20ea69aae403b79744557e0388d1b2d14343b8e2ce2630\": rpc error: code = NotFound desc = could not find container \"c516fdf633292db63e20ea69aae403b79744557e0388d1b2d14343b8e2ce2630\": container with ID starting with c516fdf633292db63e20ea69aae403b79744557e0388d1b2d14343b8e2ce2630 not found: ID does not exist"
Jan 28 16:16:00 crc kubenswrapper[4981]: I0128 16:16:00.262686 4981 scope.go:117] "RemoveContainer" containerID="a73de0fd5182a1a700f1aeac1f7bdd11dc8bee5e804dee84c2b74c5c021bfd29"
Jan 28 16:16:00 crc kubenswrapper[4981]: E0128 16:16:00.263028 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a73de0fd5182a1a700f1aeac1f7bdd11dc8bee5e804dee84c2b74c5c021bfd29\": container with ID starting with a73de0fd5182a1a700f1aeac1f7bdd11dc8bee5e804dee84c2b74c5c021bfd29 not found: ID does not exist" containerID="a73de0fd5182a1a700f1aeac1f7bdd11dc8bee5e804dee84c2b74c5c021bfd29"
Jan 28 16:16:00 crc kubenswrapper[4981]: I0128 16:16:00.263048 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a73de0fd5182a1a700f1aeac1f7bdd11dc8bee5e804dee84c2b74c5c021bfd29"} err="failed to get container status \"a73de0fd5182a1a700f1aeac1f7bdd11dc8bee5e804dee84c2b74c5c021bfd29\": rpc error: code = NotFound desc = could not find container \"a73de0fd5182a1a700f1aeac1f7bdd11dc8bee5e804dee84c2b74c5c021bfd29\": container with ID starting with a73de0fd5182a1a700f1aeac1f7bdd11dc8bee5e804dee84c2b74c5c021bfd29 not found: ID does not exist"
Jan 28 16:16:01 crc kubenswrapper[4981]: I0128 16:16:01.330596 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f773c23-5ad3-4f69-b968-66185e793bb7" path="/var/lib/kubelet/pods/5f773c23-5ad3-4f69-b968-66185e793bb7/volumes"
Jan 28 16:16:17 crc kubenswrapper[4981]: I0128 16:16:17.325775 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rxw2q_de4f8121-43f6-4041-873d-2c13aca10ed9/kube-rbac-proxy/0.log"
Jan 28 16:16:17 crc kubenswrapper[4981]: I0128 16:16:17.375928 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rxw2q_de4f8121-43f6-4041-873d-2c13aca10ed9/controller/0.log"
Jan 28 16:16:17 crc kubenswrapper[4981]: I0128 16:16:17.424607 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-frr-files/0.log"
Jan 28 16:16:17 crc kubenswrapper[4981]: I0128 16:16:17.637656 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-reloader/0.log"
Jan 28 16:16:17 crc kubenswrapper[4981]: I0128 16:16:17.650887 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-frr-files/0.log"
Jan 28 16:16:17 crc kubenswrapper[4981]: I0128 16:16:17.658539 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-metrics/0.log"
Jan 28 16:16:17 crc kubenswrapper[4981]: I0128 16:16:17.680422 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-reloader/0.log"
Jan 28 16:16:17 crc kubenswrapper[4981]: I0128 16:16:17.866359 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-reloader/0.log"
Jan 28 16:16:17 crc kubenswrapper[4981]: I0128 16:16:17.872216 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-metrics/0.log"
Jan 28 16:16:17 crc kubenswrapper[4981]: I0128 16:16:17.872393 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-frr-files/0.log"
Jan 28 16:16:17 crc kubenswrapper[4981]: I0128 16:16:17.917278 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-metrics/0.log"
Jan 28 16:16:18 crc kubenswrapper[4981]: I0128 16:16:18.082073 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-frr-files/0.log"
Jan 28 16:16:18 crc kubenswrapper[4981]: I0128 16:16:18.084942 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-metrics/0.log"
Jan 28 16:16:18 crc kubenswrapper[4981]: I0128 16:16:18.089961 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/cp-reloader/0.log"
Jan 28 16:16:18 crc kubenswrapper[4981]: I0128 16:16:18.120687 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/controller/0.log"
Jan 28 16:16:18 crc kubenswrapper[4981]: I0128 16:16:18.313008 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/kube-rbac-proxy-frr/0.log"
Jan 28 16:16:18 crc kubenswrapper[4981]: I0128 16:16:18.337710 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/frr-metrics/0.log"
Jan 28 16:16:18 crc kubenswrapper[4981]: I0128 16:16:18.363255 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/kube-rbac-proxy/0.log"
Jan 28 16:16:18 crc kubenswrapper[4981]: I0128 16:16:18.599691 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-ksslx_c0df8723-60c7-4731-8420-e3279d5f1fce/frr-k8s-webhook-server/0.log"
Jan 28 16:16:18 crc kubenswrapper[4981]: I0128 16:16:18.602513 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/reloader/0.log"
Jan 28 16:16:18 crc kubenswrapper[4981]: I0128 16:16:18.888174 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-599f895949-pcz4b_0a0d4786-f200-41af-b16c-23528e0537dd/manager/0.log"
Jan 28 16:16:19 crc kubenswrapper[4981]: I0128 16:16:19.126455 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-59465cf79b-kmxjc_bd39e6ba-068e-4ce1-936b-15b3c003cd04/webhook-server/0.log"
Jan 28 16:16:19 crc kubenswrapper[4981]: I0128 16:16:19.180431 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-86r4q_50c5243a-5761-47df-b450-770a6522770c/kube-rbac-proxy/0.log"
Jan 28 16:16:19 crc kubenswrapper[4981]: I0128 16:16:19.761055 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-86r4q_50c5243a-5761-47df-b450-770a6522770c/speaker/0.log"
Jan 28 16:16:19 crc kubenswrapper[4981]: I0128 16:16:19.789801 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hvd88_2f8b9f36-0910-4437-b804-d62c58740667/frr/0.log"
Jan 28 16:16:33 crc kubenswrapper[4981]: I0128 16:16:33.636700 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg_34033ece-4d02-4648-9025-0642096f42d3/util/0.log"
Jan 28 16:16:34 crc kubenswrapper[4981]: I0128 16:16:34.203451 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg_34033ece-4d02-4648-9025-0642096f42d3/pull/0.log"
Jan 28 16:16:34 crc kubenswrapper[4981]: I0128 16:16:34.232669 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg_34033ece-4d02-4648-9025-0642096f42d3/pull/0.log"
Jan 28 16:16:34 crc kubenswrapper[4981]: I0128 16:16:34.257347 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg_34033ece-4d02-4648-9025-0642096f42d3/util/0.log"
Jan 28 16:16:34 crc kubenswrapper[4981]: I0128 16:16:34.425954 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg_34033ece-4d02-4648-9025-0642096f42d3/pull/0.log"
Jan 28 16:16:34 crc kubenswrapper[4981]: I0128 16:16:34.431141 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg_34033ece-4d02-4648-9025-0642096f42d3/util/0.log"
Jan 28 16:16:34 crc kubenswrapper[4981]: I0128 16:16:34.456561 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcplmrg_34033ece-4d02-4648-9025-0642096f42d3/extract/0.log"
Jan 28 16:16:34 crc
Jan 28 16:16:34 crc kubenswrapper[4981]: I0128 16:16:34.761414 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl_35228b73-1ad1-4fa7-9470-ba0f42f71c3f/pull/0.log"
Jan 28 16:16:34 crc kubenswrapper[4981]: I0128 16:16:34.785862 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl_35228b73-1ad1-4fa7-9470-ba0f42f71c3f/util/0.log"
Jan 28 16:16:34 crc kubenswrapper[4981]: I0128 16:16:34.786858 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl_35228b73-1ad1-4fa7-9470-ba0f42f71c3f/pull/0.log"
Jan 28 16:16:34 crc kubenswrapper[4981]: I0128 16:16:34.951499 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl_35228b73-1ad1-4fa7-9470-ba0f42f71c3f/pull/0.log"
Jan 28 16:16:34 crc kubenswrapper[4981]: I0128 16:16:34.967396 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl_35228b73-1ad1-4fa7-9470-ba0f42f71c3f/util/0.log"
Jan 28 16:16:34 crc kubenswrapper[4981]: I0128 16:16:34.985011 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713g4vgl_35228b73-1ad1-4fa7-9470-ba0f42f71c3f/extract/0.log"
Jan 28 16:16:35 crc kubenswrapper[4981]: I0128 16:16:35.102868 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bcv2k_386dca69-5d28-4d18-899e-7fd92d5eb6ad/extract-utilities/0.log"
Jan 28 16:16:35 crc kubenswrapper[4981]: I0128 16:16:35.302928 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bcv2k_386dca69-5d28-4d18-899e-7fd92d5eb6ad/extract-utilities/0.log"
Jan 28 16:16:35 crc kubenswrapper[4981]: I0128 16:16:35.919722 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bcv2k_386dca69-5d28-4d18-899e-7fd92d5eb6ad/extract-content/0.log"
Jan 28 16:16:35 crc kubenswrapper[4981]: I0128 16:16:35.924085 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bcv2k_386dca69-5d28-4d18-899e-7fd92d5eb6ad/extract-content/0.log"
Jan 28 16:16:36 crc kubenswrapper[4981]: I0128 16:16:36.090120 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bcv2k_386dca69-5d28-4d18-899e-7fd92d5eb6ad/extract-utilities/0.log"
Jan 28 16:16:36 crc kubenswrapper[4981]: I0128 16:16:36.121303 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bcv2k_386dca69-5d28-4d18-899e-7fd92d5eb6ad/extract-content/0.log"
Jan 28 16:16:36 crc kubenswrapper[4981]: I0128 16:16:36.278082 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6hpb_baebe073-e075-4f9f-98aa-d1fbe2e55934/extract-utilities/0.log"
Jan 28 16:16:36 crc kubenswrapper[4981]: I0128 16:16:36.464237 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6hpb_baebe073-e075-4f9f-98aa-d1fbe2e55934/extract-content/0.log"
Jan 28 16:16:36 crc kubenswrapper[4981]: I0128 16:16:36.561886 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6hpb_baebe073-e075-4f9f-98aa-d1fbe2e55934/extract-content/0.log"
Jan 28 16:16:36 crc kubenswrapper[4981]: I0128 16:16:36.611819 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6hpb_baebe073-e075-4f9f-98aa-d1fbe2e55934/extract-utilities/0.log"
Jan 28 16:16:36 crc kubenswrapper[4981]: I0128 16:16:36.792915 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6hpb_baebe073-e075-4f9f-98aa-d1fbe2e55934/extract-utilities/0.log"
Jan 28 16:16:36 crc kubenswrapper[4981]: I0128 16:16:36.811061 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6hpb_baebe073-e075-4f9f-98aa-d1fbe2e55934/extract-content/0.log"
Jan 28 16:16:36 crc kubenswrapper[4981]: I0128 16:16:36.976462 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bcv2k_386dca69-5d28-4d18-899e-7fd92d5eb6ad/registry-server/0.log"
Jan 28 16:16:37 crc kubenswrapper[4981]: I0128 16:16:37.086553 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-49xjs_8b6270bb-35bf-4292-b065-b6572531a590/marketplace-operator/0.log"
Jan 28 16:16:37 crc kubenswrapper[4981]: I0128 16:16:37.209833 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-njnj4_b6349813-dabe-4141-9e83-8d8a99458444/extract-utilities/0.log"
Jan 28 16:16:37 crc kubenswrapper[4981]: I0128 16:16:37.465485 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-njnj4_b6349813-dabe-4141-9e83-8d8a99458444/extract-utilities/0.log"
Jan 28 16:16:37 crc kubenswrapper[4981]: I0128 16:16:37.479171 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-njnj4_b6349813-dabe-4141-9e83-8d8a99458444/extract-content/0.log"
Jan 28 16:16:37 crc kubenswrapper[4981]: I0128 16:16:37.513452 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-c6hpb_baebe073-e075-4f9f-98aa-d1fbe2e55934/registry-server/0.log"
Jan 28 16:16:37 crc kubenswrapper[4981]: I0128 16:16:37.524638 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-njnj4_b6349813-dabe-4141-9e83-8d8a99458444/extract-content/0.log"
Jan 28 16:16:37 crc kubenswrapper[4981]: I0128 16:16:37.692690 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-njnj4_b6349813-dabe-4141-9e83-8d8a99458444/extract-content/0.log"
Jan 28 16:16:37 crc kubenswrapper[4981]: I0128 16:16:37.728550 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-njnj4_b6349813-dabe-4141-9e83-8d8a99458444/extract-utilities/0.log"
Jan 28 16:16:37 crc kubenswrapper[4981]: I0128 16:16:37.743029 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wl8c9_f6175e49-f0fc-41d3-b750-e9e1b6cbca02/extract-utilities/0.log"
Jan 28 16:16:37 crc kubenswrapper[4981]: I0128 16:16:37.844093 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-njnj4_b6349813-dabe-4141-9e83-8d8a99458444/registry-server/0.log"
Jan 28 16:16:37 crc kubenswrapper[4981]: I0128 16:16:37.903124 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wl8c9_f6175e49-f0fc-41d3-b750-e9e1b6cbca02/extract-utilities/0.log"
Jan 28 16:16:37 crc kubenswrapper[4981]: I0128 16:16:37.910404 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wl8c9_f6175e49-f0fc-41d3-b750-e9e1b6cbca02/extract-content/0.log"
Jan 28 16:16:37 crc kubenswrapper[4981]: I0128 16:16:37.937697 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wl8c9_f6175e49-f0fc-41d3-b750-e9e1b6cbca02/extract-content/0.log"
Jan 28 16:16:38 crc kubenswrapper[4981]: I0128 16:16:38.069976 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wl8c9_f6175e49-f0fc-41d3-b750-e9e1b6cbca02/extract-utilities/0.log"
Jan 28 16:16:38 crc kubenswrapper[4981]: I0128 16:16:38.091538 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wl8c9_f6175e49-f0fc-41d3-b750-e9e1b6cbca02/extract-content/0.log"
Jan 28 16:16:38 crc kubenswrapper[4981]: I0128 16:16:38.263155 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wl8c9_f6175e49-f0fc-41d3-b750-e9e1b6cbca02/registry-server/0.log"
Jan 28 16:16:58 crc kubenswrapper[4981]: E0128 16:16:58.046240 4981 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.151:46854->38.102.83.151:33457: read tcp 38.102.83.151:46854->38.102.83.151:33457: read: connection reset by peer
Jan 28 16:18:19 crc kubenswrapper[4981]: I0128 16:18:19.897432 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 16:18:19 crc kubenswrapper[4981]: I0128 16:18:19.899168 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 16:18:25 crc kubenswrapper[4981]: I0128 16:18:25.466641 4981 generic.go:334] "Generic (PLEG): container finished" podID="7179bcf6-341a-49a0-b2f8-60e6f5833dfd" containerID="847d1a516e064a00c2f2cfaddf7caf936d0686b794734b748cedf4bbad8a638a" exitCode=0
Jan 28 16:18:25 crc kubenswrapper[4981]: I0128 16:18:25.466725 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-tzkv5/must-gather-dn99r" event={"ID":"7179bcf6-341a-49a0-b2f8-60e6f5833dfd","Type":"ContainerDied","Data":"847d1a516e064a00c2f2cfaddf7caf936d0686b794734b748cedf4bbad8a638a"}
Jan 28 16:18:25 crc kubenswrapper[4981]: I0128 16:18:25.467795 4981 scope.go:117] "RemoveContainer" containerID="847d1a516e064a00c2f2cfaddf7caf936d0686b794734b748cedf4bbad8a638a"
Jan 28 16:18:25 crc kubenswrapper[4981]: I0128 16:18:25.983308 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tzkv5_must-gather-dn99r_7179bcf6-341a-49a0-b2f8-60e6f5833dfd/gather/0.log"
Jan 28 16:18:36 crc kubenswrapper[4981]: I0128 16:18:36.624904 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-tzkv5/must-gather-dn99r"]
Jan 28 16:18:36 crc kubenswrapper[4981]: I0128 16:18:36.625671 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-tzkv5/must-gather-dn99r" podUID="7179bcf6-341a-49a0-b2f8-60e6f5833dfd" containerName="copy" containerID="cri-o://7c9ca56d45f40189a00095dfb5767b54644eb55d6a7e46b713ff5bc67a49b1d7" gracePeriod=2
Jan 28 16:18:36 crc kubenswrapper[4981]: I0128 16:18:36.635707 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-tzkv5/must-gather-dn99r"]
Jan 28 16:18:37 crc kubenswrapper[4981]: I0128 16:18:37.077910 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tzkv5_must-gather-dn99r_7179bcf6-341a-49a0-b2f8-60e6f5833dfd/copy/0.log"
Jan 28 16:18:37 crc kubenswrapper[4981]: I0128 16:18:37.078655 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tzkv5/must-gather-dn99r"
Jan 28 16:18:37 crc kubenswrapper[4981]: I0128 16:18:37.225105 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7179bcf6-341a-49a0-b2f8-60e6f5833dfd-must-gather-output\") pod \"7179bcf6-341a-49a0-b2f8-60e6f5833dfd\" (UID: \"7179bcf6-341a-49a0-b2f8-60e6f5833dfd\") "
Jan 28 16:18:37 crc kubenswrapper[4981]: I0128 16:18:37.225407 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lmkq\" (UniqueName: \"kubernetes.io/projected/7179bcf6-341a-49a0-b2f8-60e6f5833dfd-kube-api-access-2lmkq\") pod \"7179bcf6-341a-49a0-b2f8-60e6f5833dfd\" (UID: \"7179bcf6-341a-49a0-b2f8-60e6f5833dfd\") "
Jan 28 16:18:37 crc kubenswrapper[4981]: I0128 16:18:37.230775 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7179bcf6-341a-49a0-b2f8-60e6f5833dfd-kube-api-access-2lmkq" (OuterVolumeSpecName: "kube-api-access-2lmkq") pod "7179bcf6-341a-49a0-b2f8-60e6f5833dfd" (UID: "7179bcf6-341a-49a0-b2f8-60e6f5833dfd"). InnerVolumeSpecName "kube-api-access-2lmkq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 16:18:37 crc kubenswrapper[4981]: I0128 16:18:37.329856 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lmkq\" (UniqueName: \"kubernetes.io/projected/7179bcf6-341a-49a0-b2f8-60e6f5833dfd-kube-api-access-2lmkq\") on node \"crc\" DevicePath \"\""
Jan 28 16:18:37 crc kubenswrapper[4981]: I0128 16:18:37.375585 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7179bcf6-341a-49a0-b2f8-60e6f5833dfd-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "7179bcf6-341a-49a0-b2f8-60e6f5833dfd" (UID: "7179bcf6-341a-49a0-b2f8-60e6f5833dfd"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 16:18:37 crc kubenswrapper[4981]: I0128 16:18:37.431545 4981 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/7179bcf6-341a-49a0-b2f8-60e6f5833dfd-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 28 16:18:37 crc kubenswrapper[4981]: I0128 16:18:37.576242 4981 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-tzkv5_must-gather-dn99r_7179bcf6-341a-49a0-b2f8-60e6f5833dfd/copy/0.log"
Jan 28 16:18:37 crc kubenswrapper[4981]: I0128 16:18:37.576617 4981 generic.go:334] "Generic (PLEG): container finished" podID="7179bcf6-341a-49a0-b2f8-60e6f5833dfd" containerID="7c9ca56d45f40189a00095dfb5767b54644eb55d6a7e46b713ff5bc67a49b1d7" exitCode=143
Jan 28 16:18:37 crc kubenswrapper[4981]: I0128 16:18:37.576671 4981 scope.go:117] "RemoveContainer" containerID="7c9ca56d45f40189a00095dfb5767b54644eb55d6a7e46b713ff5bc67a49b1d7"
Jan 28 16:18:37 crc kubenswrapper[4981]: I0128 16:18:37.576726 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-tzkv5/must-gather-dn99r"
Jan 28 16:18:37 crc kubenswrapper[4981]: I0128 16:18:37.606445 4981 scope.go:117] "RemoveContainer" containerID="847d1a516e064a00c2f2cfaddf7caf936d0686b794734b748cedf4bbad8a638a"
Jan 28 16:18:37 crc kubenswrapper[4981]: I0128 16:18:37.686166 4981 scope.go:117] "RemoveContainer" containerID="7c9ca56d45f40189a00095dfb5767b54644eb55d6a7e46b713ff5bc67a49b1d7"
Jan 28 16:18:37 crc kubenswrapper[4981]: E0128 16:18:37.686631 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c9ca56d45f40189a00095dfb5767b54644eb55d6a7e46b713ff5bc67a49b1d7\": container with ID starting with 7c9ca56d45f40189a00095dfb5767b54644eb55d6a7e46b713ff5bc67a49b1d7 not found: ID does not exist" containerID="7c9ca56d45f40189a00095dfb5767b54644eb55d6a7e46b713ff5bc67a49b1d7"
Jan 28 16:18:37 crc kubenswrapper[4981]: I0128 16:18:37.686661 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c9ca56d45f40189a00095dfb5767b54644eb55d6a7e46b713ff5bc67a49b1d7"} err="failed to get container status \"7c9ca56d45f40189a00095dfb5767b54644eb55d6a7e46b713ff5bc67a49b1d7\": rpc error: code = NotFound desc = could not find container \"7c9ca56d45f40189a00095dfb5767b54644eb55d6a7e46b713ff5bc67a49b1d7\": container with ID starting with 7c9ca56d45f40189a00095dfb5767b54644eb55d6a7e46b713ff5bc67a49b1d7 not found: ID does not exist"
Jan 28 16:18:37 crc kubenswrapper[4981]: I0128 16:18:37.686685 4981 scope.go:117] "RemoveContainer" containerID="847d1a516e064a00c2f2cfaddf7caf936d0686b794734b748cedf4bbad8a638a"
Jan 28 16:18:37 crc kubenswrapper[4981]: E0128 16:18:37.686899 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"847d1a516e064a00c2f2cfaddf7caf936d0686b794734b748cedf4bbad8a638a\": container with ID starting with 847d1a516e064a00c2f2cfaddf7caf936d0686b794734b748cedf4bbad8a638a not found: ID does not exist" containerID="847d1a516e064a00c2f2cfaddf7caf936d0686b794734b748cedf4bbad8a638a"
Jan 28 16:18:37 crc kubenswrapper[4981]: I0128 16:18:37.686922 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"847d1a516e064a00c2f2cfaddf7caf936d0686b794734b748cedf4bbad8a638a"} err="failed to get container status \"847d1a516e064a00c2f2cfaddf7caf936d0686b794734b748cedf4bbad8a638a\": rpc error: code = NotFound desc = could not find container \"847d1a516e064a00c2f2cfaddf7caf936d0686b794734b748cedf4bbad8a638a\": container with ID starting with 847d1a516e064a00c2f2cfaddf7caf936d0686b794734b748cedf4bbad8a638a not found: ID does not exist"
Jan 28 16:18:39 crc kubenswrapper[4981]: I0128 16:18:39.332893 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7179bcf6-341a-49a0-b2f8-60e6f5833dfd" path="/var/lib/kubelet/pods/7179bcf6-341a-49a0-b2f8-60e6f5833dfd/volumes"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.460824 4981 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-svvdc"]
Jan 28 16:18:45 crc kubenswrapper[4981]: E0128 16:18:45.461758 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f1e504-3786-4fd3-8b58-ed5e9b830b5e" containerName="registry-server"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.461794 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f1e504-3786-4fd3-8b58-ed5e9b830b5e" containerName="registry-server"
Jan 28 16:18:45 crc kubenswrapper[4981]: E0128 16:18:45.461815 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7179bcf6-341a-49a0-b2f8-60e6f5833dfd" containerName="gather"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.461825 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="7179bcf6-341a-49a0-b2f8-60e6f5833dfd" containerName="gather"
Jan 28 16:18:45 crc kubenswrapper[4981]: E0128 16:18:45.461839 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f773c23-5ad3-4f69-b968-66185e793bb7" containerName="registry-server"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.461847 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f773c23-5ad3-4f69-b968-66185e793bb7" containerName="registry-server"
Jan 28 16:18:45 crc kubenswrapper[4981]: E0128 16:18:45.461863 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f1e504-3786-4fd3-8b58-ed5e9b830b5e" containerName="extract-utilities"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.461871 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f1e504-3786-4fd3-8b58-ed5e9b830b5e" containerName="extract-utilities"
Jan 28 16:18:45 crc kubenswrapper[4981]: E0128 16:18:45.461900 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f773c23-5ad3-4f69-b968-66185e793bb7" containerName="extract-utilities"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.461909 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f773c23-5ad3-4f69-b968-66185e793bb7" containerName="extract-utilities"
Jan 28 16:18:45 crc kubenswrapper[4981]: E0128 16:18:45.461924 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f773c23-5ad3-4f69-b968-66185e793bb7" containerName="extract-content"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.461931 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f773c23-5ad3-4f69-b968-66185e793bb7" containerName="extract-content"
Jan 28 16:18:45 crc kubenswrapper[4981]: E0128 16:18:45.461946 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48f1e504-3786-4fd3-8b58-ed5e9b830b5e" containerName="extract-content"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.461953 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="48f1e504-3786-4fd3-8b58-ed5e9b830b5e" containerName="extract-content"
Jan 28 16:18:45 crc kubenswrapper[4981]: E0128 16:18:45.461971 4981 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7179bcf6-341a-49a0-b2f8-60e6f5833dfd" containerName="copy"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.461979 4981 state_mem.go:107] "Deleted CPUSet assignment" podUID="7179bcf6-341a-49a0-b2f8-60e6f5833dfd" containerName="copy"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.462171 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="48f1e504-3786-4fd3-8b58-ed5e9b830b5e" containerName="registry-server"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.462210 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="7179bcf6-341a-49a0-b2f8-60e6f5833dfd" containerName="copy"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.462230 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="7179bcf6-341a-49a0-b2f8-60e6f5833dfd" containerName="gather"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.462252 4981 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f773c23-5ad3-4f69-b968-66185e793bb7" containerName="registry-server"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.468382 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-svvdc"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.490989 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-svvdc"]
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.580444 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c5b2dff-4f9d-43db-bfb4-7a67486243a9-catalog-content\") pod \"redhat-operators-svvdc\" (UID: \"7c5b2dff-4f9d-43db-bfb4-7a67486243a9\") " pod="openshift-marketplace/redhat-operators-svvdc"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.580764 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8lrh\" (UniqueName: \"kubernetes.io/projected/7c5b2dff-4f9d-43db-bfb4-7a67486243a9-kube-api-access-f8lrh\") pod \"redhat-operators-svvdc\" (UID: \"7c5b2dff-4f9d-43db-bfb4-7a67486243a9\") " pod="openshift-marketplace/redhat-operators-svvdc"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.581116 4981 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c5b2dff-4f9d-43db-bfb4-7a67486243a9-utilities\") pod \"redhat-operators-svvdc\" (UID: \"7c5b2dff-4f9d-43db-bfb4-7a67486243a9\") " pod="openshift-marketplace/redhat-operators-svvdc"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.682648 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c5b2dff-4f9d-43db-bfb4-7a67486243a9-catalog-content\") pod \"redhat-operators-svvdc\" (UID: \"7c5b2dff-4f9d-43db-bfb4-7a67486243a9\") " pod="openshift-marketplace/redhat-operators-svvdc"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.682716 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8lrh\" (UniqueName: \"kubernetes.io/projected/7c5b2dff-4f9d-43db-bfb4-7a67486243a9-kube-api-access-f8lrh\") pod \"redhat-operators-svvdc\" (UID: \"7c5b2dff-4f9d-43db-bfb4-7a67486243a9\") " pod="openshift-marketplace/redhat-operators-svvdc"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.682754 4981 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c5b2dff-4f9d-43db-bfb4-7a67486243a9-utilities\") pod \"redhat-operators-svvdc\" (UID: \"7c5b2dff-4f9d-43db-bfb4-7a67486243a9\") " pod="openshift-marketplace/redhat-operators-svvdc"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.683222 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c5b2dff-4f9d-43db-bfb4-7a67486243a9-catalog-content\") pod \"redhat-operators-svvdc\" (UID: \"7c5b2dff-4f9d-43db-bfb4-7a67486243a9\") " pod="openshift-marketplace/redhat-operators-svvdc"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.683233 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c5b2dff-4f9d-43db-bfb4-7a67486243a9-utilities\") pod \"redhat-operators-svvdc\" (UID: \"7c5b2dff-4f9d-43db-bfb4-7a67486243a9\") " pod="openshift-marketplace/redhat-operators-svvdc"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.704210 4981 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8lrh\" (UniqueName: \"kubernetes.io/projected/7c5b2dff-4f9d-43db-bfb4-7a67486243a9-kube-api-access-f8lrh\") pod \"redhat-operators-svvdc\" (UID: \"7c5b2dff-4f9d-43db-bfb4-7a67486243a9\") " pod="openshift-marketplace/redhat-operators-svvdc"
Jan 28 16:18:45 crc kubenswrapper[4981]: I0128 16:18:45.824706 4981 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-svvdc"
Jan 28 16:18:46 crc kubenswrapper[4981]: I0128 16:18:46.323520 4981 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-svvdc"]
Jan 28 16:18:46 crc kubenswrapper[4981]: I0128 16:18:46.651116 4981 generic.go:334] "Generic (PLEG): container finished" podID="7c5b2dff-4f9d-43db-bfb4-7a67486243a9" containerID="763a39c0d87c367c59750e9690ee1616bd4413340458d8a73bbcc2e04e9ca2dc" exitCode=0
Jan 28 16:18:46 crc kubenswrapper[4981]: I0128 16:18:46.651160 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svvdc" event={"ID":"7c5b2dff-4f9d-43db-bfb4-7a67486243a9","Type":"ContainerDied","Data":"763a39c0d87c367c59750e9690ee1616bd4413340458d8a73bbcc2e04e9ca2dc"}
Jan 28 16:18:46 crc kubenswrapper[4981]: I0128 16:18:46.651205 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svvdc" event={"ID":"7c5b2dff-4f9d-43db-bfb4-7a67486243a9","Type":"ContainerStarted","Data":"22cadd3b3eef9cb3fb050a4313735b3a5bfa3955c2120552342ce093a0d4b7a0"}
Jan 28 16:18:46 crc kubenswrapper[4981]: I0128 16:18:46.652947 4981 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 16:18:47 crc kubenswrapper[4981]: I0128 16:18:47.662300 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svvdc" event={"ID":"7c5b2dff-4f9d-43db-bfb4-7a67486243a9","Type":"ContainerStarted","Data":"29064b07555b9e1b99f99a52b32228b148a773af1a895f8497cd639f763222a9"}
Jan 28 16:18:49 crc kubenswrapper[4981]: I0128 16:18:49.897797 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 16:18:49 crc kubenswrapper[4981]: I0128 16:18:49.898270 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 16:18:52 crc kubenswrapper[4981]: I0128 16:18:52.706846 4981 generic.go:334] "Generic (PLEG): container finished" podID="7c5b2dff-4f9d-43db-bfb4-7a67486243a9" containerID="29064b07555b9e1b99f99a52b32228b148a773af1a895f8497cd639f763222a9" exitCode=0
Jan 28 16:18:52 crc kubenswrapper[4981]: I0128 16:18:52.706885 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svvdc" event={"ID":"7c5b2dff-4f9d-43db-bfb4-7a67486243a9","Type":"ContainerDied","Data":"29064b07555b9e1b99f99a52b32228b148a773af1a895f8497cd639f763222a9"}
Jan 28 16:18:55 crc kubenswrapper[4981]: I0128 16:18:55.737657 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svvdc" event={"ID":"7c5b2dff-4f9d-43db-bfb4-7a67486243a9","Type":"ContainerStarted","Data":"8ad3aa51acac643e3d5b8cae105b76a7fae5d9d4c3329b2ff59adfddfd1f493b"}
Jan 28 16:18:55 crc kubenswrapper[4981]: I0128 16:18:55.764147 4981 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-svvdc" podStartSLOduration=3.040234711 podStartE2EDuration="10.764116076s" podCreationTimestamp="2026-01-28 16:18:45 +0000 UTC" firstStartedPulling="2026-01-28 16:18:46.652671847 +0000 UTC m=+4538.104830088" lastFinishedPulling="2026-01-28 16:18:54.376522501 +0000 UTC m=+4545.828711453" observedRunningTime="2026-01-28 16:18:55.753149017 +0000 UTC m=+4547.205307258" watchObservedRunningTime="2026-01-28 16:18:55.764116076 +0000 UTC m=+4547.216274317"
Jan 28 16:18:55 crc kubenswrapper[4981]: I0128 16:18:55.825171 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-svvdc"
Jan 28 16:18:55 crc kubenswrapper[4981]: I0128 16:18:55.878512 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-svvdc"
Jan 28 16:18:56 crc kubenswrapper[4981]: I0128 16:18:56.932817 4981 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-svvdc" podUID="7c5b2dff-4f9d-43db-bfb4-7a67486243a9" containerName="registry-server" probeResult="failure" output=<
Jan 28 16:18:56 crc kubenswrapper[4981]: timeout: failed to connect service ":50051" within 1s
Jan 28 16:18:56 crc kubenswrapper[4981]: >
Jan 28 16:19:06 crc kubenswrapper[4981]: I0128 16:19:06.203794 4981 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-svvdc"
Jan 28 16:19:06 crc kubenswrapper[4981]: I0128 16:19:06.255592 4981 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-svvdc"
Jan 28 16:19:06 crc kubenswrapper[4981]: I0128 16:19:06.445851 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-svvdc"]
Jan 28 16:19:07 crc kubenswrapper[4981]: I0128 16:19:07.848823 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-svvdc" podUID="7c5b2dff-4f9d-43db-bfb4-7a67486243a9" containerName="registry-server" containerID="cri-o://8ad3aa51acac643e3d5b8cae105b76a7fae5d9d4c3329b2ff59adfddfd1f493b" gracePeriod=2
containerName="registry-server" containerID="cri-o://8ad3aa51acac643e3d5b8cae105b76a7fae5d9d4c3329b2ff59adfddfd1f493b" gracePeriod=2 Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.351066 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-svvdc" Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.541834 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8lrh\" (UniqueName: \"kubernetes.io/projected/7c5b2dff-4f9d-43db-bfb4-7a67486243a9-kube-api-access-f8lrh\") pod \"7c5b2dff-4f9d-43db-bfb4-7a67486243a9\" (UID: \"7c5b2dff-4f9d-43db-bfb4-7a67486243a9\") " Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.542311 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c5b2dff-4f9d-43db-bfb4-7a67486243a9-utilities\") pod \"7c5b2dff-4f9d-43db-bfb4-7a67486243a9\" (UID: \"7c5b2dff-4f9d-43db-bfb4-7a67486243a9\") " Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.542535 4981 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c5b2dff-4f9d-43db-bfb4-7a67486243a9-catalog-content\") pod \"7c5b2dff-4f9d-43db-bfb4-7a67486243a9\" (UID: \"7c5b2dff-4f9d-43db-bfb4-7a67486243a9\") " Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.543609 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c5b2dff-4f9d-43db-bfb4-7a67486243a9-utilities" (OuterVolumeSpecName: "utilities") pod "7c5b2dff-4f9d-43db-bfb4-7a67486243a9" (UID: "7c5b2dff-4f9d-43db-bfb4-7a67486243a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.548485 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c5b2dff-4f9d-43db-bfb4-7a67486243a9-kube-api-access-f8lrh" (OuterVolumeSpecName: "kube-api-access-f8lrh") pod "7c5b2dff-4f9d-43db-bfb4-7a67486243a9" (UID: "7c5b2dff-4f9d-43db-bfb4-7a67486243a9"). InnerVolumeSpecName "kube-api-access-f8lrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.645178 4981 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c5b2dff-4f9d-43db-bfb4-7a67486243a9-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.645638 4981 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8lrh\" (UniqueName: \"kubernetes.io/projected/7c5b2dff-4f9d-43db-bfb4-7a67486243a9-kube-api-access-f8lrh\") on node \"crc\" DevicePath \"\"" Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.673498 4981 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c5b2dff-4f9d-43db-bfb4-7a67486243a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c5b2dff-4f9d-43db-bfb4-7a67486243a9" (UID: "7c5b2dff-4f9d-43db-bfb4-7a67486243a9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.747806 4981 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c5b2dff-4f9d-43db-bfb4-7a67486243a9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.858969 4981 generic.go:334] "Generic (PLEG): container finished" podID="7c5b2dff-4f9d-43db-bfb4-7a67486243a9" containerID="8ad3aa51acac643e3d5b8cae105b76a7fae5d9d4c3329b2ff59adfddfd1f493b" exitCode=0 Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.859061 4981 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-svvdc" Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.859028 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svvdc" event={"ID":"7c5b2dff-4f9d-43db-bfb4-7a67486243a9","Type":"ContainerDied","Data":"8ad3aa51acac643e3d5b8cae105b76a7fae5d9d4c3329b2ff59adfddfd1f493b"} Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.859271 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svvdc" event={"ID":"7c5b2dff-4f9d-43db-bfb4-7a67486243a9","Type":"ContainerDied","Data":"22cadd3b3eef9cb3fb050a4313735b3a5bfa3955c2120552342ce093a0d4b7a0"} Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.859297 4981 scope.go:117] "RemoveContainer" containerID="8ad3aa51acac643e3d5b8cae105b76a7fae5d9d4c3329b2ff59adfddfd1f493b" Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.889268 4981 scope.go:117] "RemoveContainer" containerID="29064b07555b9e1b99f99a52b32228b148a773af1a895f8497cd639f763222a9" Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.898313 4981 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-svvdc"] Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.910521 4981 scope.go:117] "RemoveContainer" containerID="763a39c0d87c367c59750e9690ee1616bd4413340458d8a73bbcc2e04e9ca2dc" Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.915567 4981 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-svvdc"] Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.954648 4981 scope.go:117] "RemoveContainer" containerID="8ad3aa51acac643e3d5b8cae105b76a7fae5d9d4c3329b2ff59adfddfd1f493b" Jan 28 16:19:08 crc kubenswrapper[4981]: E0128 16:19:08.955149 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ad3aa51acac643e3d5b8cae105b76a7fae5d9d4c3329b2ff59adfddfd1f493b\": container with ID starting with 8ad3aa51acac643e3d5b8cae105b76a7fae5d9d4c3329b2ff59adfddfd1f493b not found: ID does not exist" containerID="8ad3aa51acac643e3d5b8cae105b76a7fae5d9d4c3329b2ff59adfddfd1f493b" Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.955260 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ad3aa51acac643e3d5b8cae105b76a7fae5d9d4c3329b2ff59adfddfd1f493b"} err="failed to get container status \"8ad3aa51acac643e3d5b8cae105b76a7fae5d9d4c3329b2ff59adfddfd1f493b\": rpc error: code = NotFound desc = could not find container \"8ad3aa51acac643e3d5b8cae105b76a7fae5d9d4c3329b2ff59adfddfd1f493b\": container with ID starting with 8ad3aa51acac643e3d5b8cae105b76a7fae5d9d4c3329b2ff59adfddfd1f493b not found: ID does not exist" Jan 28 16:19:08 crc 
kubenswrapper[4981]: I0128 16:19:08.955290 4981 scope.go:117] "RemoveContainer" containerID="29064b07555b9e1b99f99a52b32228b148a773af1a895f8497cd639f763222a9" Jan 28 16:19:08 crc kubenswrapper[4981]: E0128 16:19:08.955572 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29064b07555b9e1b99f99a52b32228b148a773af1a895f8497cd639f763222a9\": container with ID starting with 29064b07555b9e1b99f99a52b32228b148a773af1a895f8497cd639f763222a9 not found: ID does not exist" containerID="29064b07555b9e1b99f99a52b32228b148a773af1a895f8497cd639f763222a9" Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.955602 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29064b07555b9e1b99f99a52b32228b148a773af1a895f8497cd639f763222a9"} err="failed to get container status \"29064b07555b9e1b99f99a52b32228b148a773af1a895f8497cd639f763222a9\": rpc error: code = NotFound desc = could not find container \"29064b07555b9e1b99f99a52b32228b148a773af1a895f8497cd639f763222a9\": container with ID starting with 29064b07555b9e1b99f99a52b32228b148a773af1a895f8497cd639f763222a9 not found: ID does not exist" Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.955620 4981 scope.go:117] "RemoveContainer" containerID="763a39c0d87c367c59750e9690ee1616bd4413340458d8a73bbcc2e04e9ca2dc" Jan 28 16:19:08 crc kubenswrapper[4981]: E0128 16:19:08.955895 4981 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"763a39c0d87c367c59750e9690ee1616bd4413340458d8a73bbcc2e04e9ca2dc\": container with ID starting with 763a39c0d87c367c59750e9690ee1616bd4413340458d8a73bbcc2e04e9ca2dc not found: ID does not exist" containerID="763a39c0d87c367c59750e9690ee1616bd4413340458d8a73bbcc2e04e9ca2dc" Jan 28 16:19:08 crc kubenswrapper[4981]: I0128 16:19:08.955928 4981 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"763a39c0d87c367c59750e9690ee1616bd4413340458d8a73bbcc2e04e9ca2dc"} err="failed to get container status \"763a39c0d87c367c59750e9690ee1616bd4413340458d8a73bbcc2e04e9ca2dc\": rpc error: code = NotFound desc = could not find container \"763a39c0d87c367c59750e9690ee1616bd4413340458d8a73bbcc2e04e9ca2dc\": container with ID starting with 763a39c0d87c367c59750e9690ee1616bd4413340458d8a73bbcc2e04e9ca2dc not found: ID does not exist" Jan 28 16:19:09 crc kubenswrapper[4981]: I0128 16:19:09.339549 4981 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c5b2dff-4f9d-43db-bfb4-7a67486243a9" path="/var/lib/kubelet/pods/7c5b2dff-4f9d-43db-bfb4-7a67486243a9/volumes" Jan 28 16:19:19 crc kubenswrapper[4981]: I0128 16:19:19.898178 4981 patch_prober.go:28] interesting pod/machine-config-daemon-rcgbx container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 16:19:19 crc kubenswrapper[4981]: I0128 16:19:19.898747 4981 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 16:19:19 crc kubenswrapper[4981]: I0128 16:19:19.898790 4981 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" Jan 28 16:19:19 crc kubenswrapper[4981]: I0128 16:19:19.899467 4981 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f760dbb0a2aa53beaa9da2cbd8b1ed868e5b38df09765a2c7d270efdc3c9fe7e"} pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 16:19:19 crc kubenswrapper[4981]: I0128 16:19:19.899519 4981 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" containerName="machine-config-daemon" containerID="cri-o://f760dbb0a2aa53beaa9da2cbd8b1ed868e5b38df09765a2c7d270efdc3c9fe7e" gracePeriod=600 Jan 28 16:19:20 crc kubenswrapper[4981]: E0128 16:19:20.286518 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:19:20 crc kubenswrapper[4981]: I0128 16:19:20.968114 4981 generic.go:334] "Generic (PLEG): container finished" podID="67525d77-715e-4ec3-bdbb-6854657355c0" containerID="f760dbb0a2aa53beaa9da2cbd8b1ed868e5b38df09765a2c7d270efdc3c9fe7e" exitCode=0 Jan 28 16:19:20 crc kubenswrapper[4981]: I0128 16:19:20.968403 4981 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" event={"ID":"67525d77-715e-4ec3-bdbb-6854657355c0","Type":"ContainerDied","Data":"f760dbb0a2aa53beaa9da2cbd8b1ed868e5b38df09765a2c7d270efdc3c9fe7e"} Jan 28 16:19:20 crc kubenswrapper[4981]: I0128 16:19:20.968441 4981 scope.go:117] "RemoveContainer" containerID="4ab0a2849dc2e00c4edd50099e12dfbd084b3aa7c1423fcb7eb0555a7c8c82d4" Jan 28 16:19:20 crc kubenswrapper[4981]: I0128 16:19:20.969181 4981 scope.go:117] "RemoveContainer" containerID="f760dbb0a2aa53beaa9da2cbd8b1ed868e5b38df09765a2c7d270efdc3c9fe7e" Jan 28 16:19:20 crc kubenswrapper[4981]: E0128 16:19:20.969434 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:19:34 crc kubenswrapper[4981]: I0128 16:19:34.320461 4981 scope.go:117] "RemoveContainer" containerID="f760dbb0a2aa53beaa9da2cbd8b1ed868e5b38df09765a2c7d270efdc3c9fe7e" Jan 28 16:19:34 crc kubenswrapper[4981]: E0128 16:19:34.321677 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:19:49 
crc kubenswrapper[4981]: I0128 16:19:49.326096 4981 scope.go:117] "RemoveContainer" containerID="f760dbb0a2aa53beaa9da2cbd8b1ed868e5b38df09765a2c7d270efdc3c9fe7e" Jan 28 16:19:49 crc kubenswrapper[4981]: E0128 16:19:49.326979 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:20:02 crc kubenswrapper[4981]: I0128 16:20:02.318465 4981 scope.go:117] "RemoveContainer" containerID="f760dbb0a2aa53beaa9da2cbd8b1ed868e5b38df09765a2c7d270efdc3c9fe7e" Jan 28 16:20:02 crc kubenswrapper[4981]: E0128 16:20:02.319288 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:20:14 crc kubenswrapper[4981]: I0128 16:20:14.318874 4981 scope.go:117] "RemoveContainer" containerID="f760dbb0a2aa53beaa9da2cbd8b1ed868e5b38df09765a2c7d270efdc3c9fe7e" Jan 28 16:20:14 crc kubenswrapper[4981]: E0128 16:20:14.319803 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:20:28 crc kubenswrapper[4981]: I0128 16:20:28.319745 4981 scope.go:117] "RemoveContainer" containerID="f760dbb0a2aa53beaa9da2cbd8b1ed868e5b38df09765a2c7d270efdc3c9fe7e" Jan 28 16:20:28 crc kubenswrapper[4981]: E0128 16:20:28.321121 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:20:40 crc kubenswrapper[4981]: I0128 16:20:40.319073 4981 scope.go:117] "RemoveContainer" containerID="f760dbb0a2aa53beaa9da2cbd8b1ed868e5b38df09765a2c7d270efdc3c9fe7e" Jan 28 16:20:40 crc kubenswrapper[4981]: E0128 16:20:40.321921 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:20:55 crc kubenswrapper[4981]: I0128 16:20:55.319160 4981 scope.go:117] "RemoveContainer" containerID="f760dbb0a2aa53beaa9da2cbd8b1ed868e5b38df09765a2c7d270efdc3c9fe7e" Jan 28 16:20:55 crc 
kubenswrapper[4981]: E0128 16:20:55.319910 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:21:09 crc kubenswrapper[4981]: I0128 16:21:09.334218 4981 scope.go:117] "RemoveContainer" containerID="f760dbb0a2aa53beaa9da2cbd8b1ed868e5b38df09765a2c7d270efdc3c9fe7e" Jan 28 16:21:09 crc kubenswrapper[4981]: E0128 16:21:09.368781 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0" Jan 28 16:21:24 crc kubenswrapper[4981]: I0128 16:21:24.319038 4981 scope.go:117] "RemoveContainer" containerID="f760dbb0a2aa53beaa9da2cbd8b1ed868e5b38df09765a2c7d270efdc3c9fe7e" Jan 28 16:21:24 crc kubenswrapper[4981]: E0128 16:21:24.319900 4981 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rcgbx_openshift-machine-config-operator(67525d77-715e-4ec3-bdbb-6854657355c0)\"" pod="openshift-machine-config-operator/machine-config-daemon-rcgbx" podUID="67525d77-715e-4ec3-bdbb-6854657355c0"